Test Report: KVM_Linux_crio 21409

2aa028e6c9ae4a79883616b371bbf57b9811dc19:2025-10-14:41906

Failed tests (6/324)

TestAddons/parallel/Ingress (161.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-082251 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-082251 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-082251 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3734bf2f-d6f3-4fc4-a164-aa3a5ecee661] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3734bf2f-d6f3-4fc4-a164-aa3a5ecee661] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004184072s
I1014 19:14:12.206047  368634 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-082251 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.80467399s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-082251 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.214
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-082251 -n addons-082251
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 logs -n 25: (1.336870592s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-480467                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-480467 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │ 14 Oct 25 19:10 UTC │
	│ start   │ --download-only -p binary-mirror-462626 --alsologtostderr --binary-mirror http://127.0.0.1:42741 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-462626 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │                     │
	│ delete  │ -p binary-mirror-462626                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-462626 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │ 14 Oct 25 19:10 UTC │
	│ addons  │ disable dashboard -p addons-082251                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │                     │
	│ addons  │ enable dashboard -p addons-082251                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │                     │
	│ start   │ -p addons-082251 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ enable headlamp -p addons-082251 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:13 UTC │ 14 Oct 25 19:13 UTC │
	│ addons  │ addons-082251 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ ip      │ addons-082251 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-082251                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ ssh     │ addons-082251 ssh cat /opt/local-path-provisioner/pvc-499359fe-7f53-4caf-9df4-794032febc47_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ ssh     │ addons-082251 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │                     │
	│ addons  │ addons-082251 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:14 UTC │
	│ addons  │ addons-082251 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:14 UTC │ 14 Oct 25 19:15 UTC │
	│ ip      │ addons-082251 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-082251        │ jenkins │ v1.37.0 │ 14 Oct 25 19:16 UTC │ 14 Oct 25 19:16 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:10:50
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:10:50.538977  369324 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:10:50.539272  369324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:50.539283  369324 out.go:374] Setting ErrFile to fd 2...
	I1014 19:10:50.539287  369324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:50.539496  369324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:10:50.540028  369324 out.go:368] Setting JSON to false
	I1014 19:10:50.541052  369324 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3194,"bootTime":1760465857,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:10:50.541149  369324 start.go:141] virtualization: kvm guest
	I1014 19:10:50.542781  369324 out.go:179] * [addons-082251] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:10:50.544105  369324 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:10:50.544150  369324 notify.go:220] Checking for updates...
	I1014 19:10:50.546445  369324 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:10:50.547538  369324 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:10:50.548693  369324 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:10:50.549815  369324 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:10:50.551041  369324 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:10:50.552187  369324 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:10:50.582996  369324 out.go:179] * Using the kvm2 driver based on user configuration
	I1014 19:10:50.584245  369324 start.go:305] selected driver: kvm2
	I1014 19:10:50.584271  369324 start.go:925] validating driver "kvm2" against <nil>
	I1014 19:10:50.584290  369324 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:10:50.585152  369324 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:50.585303  369324 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:50.599797  369324 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:50.599833  369324 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:50.614704  369324 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:50.614755  369324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:10:50.615075  369324 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:10:50.615106  369324 cni.go:84] Creating CNI manager for ""
	I1014 19:10:50.615162  369324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 19:10:50.615175  369324 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 19:10:50.615236  369324 start.go:349] cluster config:
	{Name:addons-082251 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-082251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:10:50.615419  369324 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:50.617798  369324 out.go:179] * Starting "addons-082251" primary control-plane node in "addons-082251" cluster
	I1014 19:10:50.619096  369324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:10:50.619150  369324 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:10:50.619165  369324 cache.go:58] Caching tarball of preloaded images
	I1014 19:10:50.619293  369324 preload.go:233] Found /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 19:10:50.619306  369324 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 19:10:50.619723  369324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/config.json ...
	I1014 19:10:50.619755  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/config.json: {Name:mkbc2bf55e6d08a0574c5492924ab62632b15644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:10:50.619941  369324 start.go:360] acquireMachinesLock for addons-082251: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 19:10:50.620014  369324 start.go:364] duration metric: took 54.635µs to acquireMachinesLock for "addons-082251"
	I1014 19:10:50.620043  369324 start.go:93] Provisioning new machine with config: &{Name:addons-082251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-082251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:10:50.620114  369324 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 19:10:50.622282  369324 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1014 19:10:50.622475  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:10:50.622532  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:10:50.636354  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I1014 19:10:50.636831  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:10:50.637422  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:10:50.637444  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:10:50.637836  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:10:50.638077  369324 main.go:141] libmachine: (addons-082251) Calling .GetMachineName
	I1014 19:10:50.638211  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:10:50.638413  369324 start.go:159] libmachine.API.Create for "addons-082251" (driver="kvm2")
	I1014 19:10:50.638466  369324 client.go:168] LocalClient.Create starting
	I1014 19:10:50.638508  369324 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 19:10:50.709518  369324 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 19:10:50.864424  369324 main.go:141] libmachine: Running pre-create checks...
	I1014 19:10:50.864451  369324 main.go:141] libmachine: (addons-082251) Calling .PreCreateCheck
	I1014 19:10:50.864975  369324 main.go:141] libmachine: (addons-082251) Calling .GetConfigRaw
	I1014 19:10:50.865474  369324 main.go:141] libmachine: Creating machine...
	I1014 19:10:50.865492  369324 main.go:141] libmachine: (addons-082251) Calling .Create
	I1014 19:10:50.865655  369324 main.go:141] libmachine: (addons-082251) creating domain...
	I1014 19:10:50.865674  369324 main.go:141] libmachine: (addons-082251) creating network...
	I1014 19:10:50.867069  369324 main.go:141] libmachine: (addons-082251) DBG | found existing default network
	I1014 19:10:50.867352  369324 main.go:141] libmachine: (addons-082251) DBG | <network>
	I1014 19:10:50.867372  369324 main.go:141] libmachine: (addons-082251) DBG |   <name>default</name>
	I1014 19:10:50.867385  369324 main.go:141] libmachine: (addons-082251) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 19:10:50.867402  369324 main.go:141] libmachine: (addons-082251) DBG |   <forward mode='nat'>
	I1014 19:10:50.867412  369324 main.go:141] libmachine: (addons-082251) DBG |     <nat>
	I1014 19:10:50.867419  369324 main.go:141] libmachine: (addons-082251) DBG |       <port start='1024' end='65535'/>
	I1014 19:10:50.867425  369324 main.go:141] libmachine: (addons-082251) DBG |     </nat>
	I1014 19:10:50.867430  369324 main.go:141] libmachine: (addons-082251) DBG |   </forward>
	I1014 19:10:50.867448  369324 main.go:141] libmachine: (addons-082251) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 19:10:50.867457  369324 main.go:141] libmachine: (addons-082251) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 19:10:50.867465  369324 main.go:141] libmachine: (addons-082251) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 19:10:50.867477  369324 main.go:141] libmachine: (addons-082251) DBG |     <dhcp>
	I1014 19:10:50.867498  369324 main.go:141] libmachine: (addons-082251) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 19:10:50.867517  369324 main.go:141] libmachine: (addons-082251) DBG |     </dhcp>
	I1014 19:10:50.867526  369324 main.go:141] libmachine: (addons-082251) DBG |   </ip>
	I1014 19:10:50.867530  369324 main.go:141] libmachine: (addons-082251) DBG | </network>
	I1014 19:10:50.867538  369324 main.go:141] libmachine: (addons-082251) DBG | 
	I1014 19:10:50.868219  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:50.867992  369352 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136c0}
	I1014 19:10:50.868262  369324 main.go:141] libmachine: (addons-082251) DBG | defining private network:
	I1014 19:10:50.868282  369324 main.go:141] libmachine: (addons-082251) DBG | 
	I1014 19:10:50.868299  369324 main.go:141] libmachine: (addons-082251) DBG | <network>
	I1014 19:10:50.868319  369324 main.go:141] libmachine: (addons-082251) DBG |   <name>mk-addons-082251</name>
	I1014 19:10:50.868324  369324 main.go:141] libmachine: (addons-082251) DBG |   <dns enable='no'/>
	I1014 19:10:50.868336  369324 main.go:141] libmachine: (addons-082251) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 19:10:50.868344  369324 main.go:141] libmachine: (addons-082251) DBG |     <dhcp>
	I1014 19:10:50.868350  369324 main.go:141] libmachine: (addons-082251) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 19:10:50.868354  369324 main.go:141] libmachine: (addons-082251) DBG |     </dhcp>
	I1014 19:10:50.868359  369324 main.go:141] libmachine: (addons-082251) DBG |   </ip>
	I1014 19:10:50.868364  369324 main.go:141] libmachine: (addons-082251) DBG | </network>
	I1014 19:10:50.868370  369324 main.go:141] libmachine: (addons-082251) DBG | 
	I1014 19:10:50.874603  369324 main.go:141] libmachine: (addons-082251) DBG | creating private network mk-addons-082251 192.168.39.0/24...
	I1014 19:10:50.941359  369324 main.go:141] libmachine: (addons-082251) DBG | private network mk-addons-082251 192.168.39.0/24 created
	I1014 19:10:50.941614  369324 main.go:141] libmachine: (addons-082251) DBG | <network>
	I1014 19:10:50.941645  369324 main.go:141] libmachine: (addons-082251) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251 ...
	I1014 19:10:50.941654  369324 main.go:141] libmachine: (addons-082251) DBG |   <name>mk-addons-082251</name>
	I1014 19:10:50.941670  369324 main.go:141] libmachine: (addons-082251) DBG |   <uuid>c6f0a5c8-1e02-4c03-9f8a-e5eebf2b878f</uuid>
	I1014 19:10:50.941682  369324 main.go:141] libmachine: (addons-082251) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1014 19:10:50.941695  369324 main.go:141] libmachine: (addons-082251) DBG |   <mac address='52:54:00:84:87:37'/>
	I1014 19:10:50.941704  369324 main.go:141] libmachine: (addons-082251) DBG |   <dns enable='no'/>
	I1014 19:10:50.941737  369324 main.go:141] libmachine: (addons-082251) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 19:10:50.941753  369324 main.go:141] libmachine: (addons-082251) DBG |     <dhcp>
	I1014 19:10:50.941768  369324 main.go:141] libmachine: (addons-082251) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 19:10:50.941795  369324 main.go:141] libmachine: (addons-082251) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 19:10:50.941813  369324 main.go:141] libmachine: (addons-082251) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 19:10:50.941825  369324 main.go:141] libmachine: (addons-082251) DBG |     </dhcp>
	I1014 19:10:50.941831  369324 main.go:141] libmachine: (addons-082251) DBG |   </ip>
	I1014 19:10:50.941839  369324 main.go:141] libmachine: (addons-082251) DBG | </network>
	I1014 19:10:50.941850  369324 main.go:141] libmachine: (addons-082251) DBG | 
	I1014 19:10:50.941866  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:50.941595  369352 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:10:51.217961  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:51.217822  369352 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa...
	I1014 19:10:51.579173  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:51.579022  369352 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/addons-082251.rawdisk...
	I1014 19:10:51.579231  369324 main.go:141] libmachine: (addons-082251) DBG | Writing magic tar header
	I1014 19:10:51.579249  369324 main.go:141] libmachine: (addons-082251) DBG | Writing SSH key tar header
	I1014 19:10:51.579266  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:51.579152  369352 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251 ...
	I1014 19:10:51.579283  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251
	I1014 19:10:51.579297  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 19:10:51.579328  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:10:51.579341  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 19:10:51.579363  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 19:10:51.579382  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251 (perms=drwx------)
	I1014 19:10:51.579389  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home/jenkins
	I1014 19:10:51.579397  369324 main.go:141] libmachine: (addons-082251) DBG | checking permissions on dir: /home
	I1014 19:10:51.579402  369324 main.go:141] libmachine: (addons-082251) DBG | skipping /home - not owner
	I1014 19:10:51.579420  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 19:10:51.579437  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 19:10:51.579450  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 19:10:51.579475  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 19:10:51.579489  369324 main.go:141] libmachine: (addons-082251) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 19:10:51.579498  369324 main.go:141] libmachine: (addons-082251) defining domain...
	I1014 19:10:51.580589  369324 main.go:141] libmachine: (addons-082251) defining domain using XML: 
	I1014 19:10:51.580612  369324 main.go:141] libmachine: (addons-082251) <domain type='kvm'>
	I1014 19:10:51.580618  369324 main.go:141] libmachine: (addons-082251)   <name>addons-082251</name>
	I1014 19:10:51.580623  369324 main.go:141] libmachine: (addons-082251)   <memory unit='MiB'>4096</memory>
	I1014 19:10:51.580630  369324 main.go:141] libmachine: (addons-082251)   <vcpu>2</vcpu>
	I1014 19:10:51.580641  369324 main.go:141] libmachine: (addons-082251)   <features>
	I1014 19:10:51.580666  369324 main.go:141] libmachine: (addons-082251)     <acpi/>
	I1014 19:10:51.580679  369324 main.go:141] libmachine: (addons-082251)     <apic/>
	I1014 19:10:51.580685  369324 main.go:141] libmachine: (addons-082251)     <pae/>
	I1014 19:10:51.580689  369324 main.go:141] libmachine: (addons-082251)   </features>
	I1014 19:10:51.580737  369324 main.go:141] libmachine: (addons-082251)   <cpu mode='host-passthrough'>
	I1014 19:10:51.580756  369324 main.go:141] libmachine: (addons-082251)   </cpu>
	I1014 19:10:51.580762  369324 main.go:141] libmachine: (addons-082251)   <os>
	I1014 19:10:51.580770  369324 main.go:141] libmachine: (addons-082251)     <type>hvm</type>
	I1014 19:10:51.580775  369324 main.go:141] libmachine: (addons-082251)     <boot dev='cdrom'/>
	I1014 19:10:51.580781  369324 main.go:141] libmachine: (addons-082251)     <boot dev='hd'/>
	I1014 19:10:51.580786  369324 main.go:141] libmachine: (addons-082251)     <bootmenu enable='no'/>
	I1014 19:10:51.580790  369324 main.go:141] libmachine: (addons-082251)   </os>
	I1014 19:10:51.580796  369324 main.go:141] libmachine: (addons-082251)   <devices>
	I1014 19:10:51.580801  369324 main.go:141] libmachine: (addons-082251)     <disk type='file' device='cdrom'>
	I1014 19:10:51.580811  369324 main.go:141] libmachine: (addons-082251)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/boot2docker.iso'/>
	I1014 19:10:51.580818  369324 main.go:141] libmachine: (addons-082251)       <target dev='hdc' bus='scsi'/>
	I1014 19:10:51.580823  369324 main.go:141] libmachine: (addons-082251)       <readonly/>
	I1014 19:10:51.580829  369324 main.go:141] libmachine: (addons-082251)     </disk>
	I1014 19:10:51.580835  369324 main.go:141] libmachine: (addons-082251)     <disk type='file' device='disk'>
	I1014 19:10:51.580844  369324 main.go:141] libmachine: (addons-082251)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 19:10:51.580855  369324 main.go:141] libmachine: (addons-082251)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/addons-082251.rawdisk'/>
	I1014 19:10:51.580861  369324 main.go:141] libmachine: (addons-082251)       <target dev='hda' bus='virtio'/>
	I1014 19:10:51.580866  369324 main.go:141] libmachine: (addons-082251)     </disk>
	I1014 19:10:51.580870  369324 main.go:141] libmachine: (addons-082251)     <interface type='network'>
	I1014 19:10:51.580878  369324 main.go:141] libmachine: (addons-082251)       <source network='mk-addons-082251'/>
	I1014 19:10:51.580882  369324 main.go:141] libmachine: (addons-082251)       <model type='virtio'/>
	I1014 19:10:51.580889  369324 main.go:141] libmachine: (addons-082251)     </interface>
	I1014 19:10:51.580894  369324 main.go:141] libmachine: (addons-082251)     <interface type='network'>
	I1014 19:10:51.580901  369324 main.go:141] libmachine: (addons-082251)       <source network='default'/>
	I1014 19:10:51.580905  369324 main.go:141] libmachine: (addons-082251)       <model type='virtio'/>
	I1014 19:10:51.580932  369324 main.go:141] libmachine: (addons-082251)     </interface>
	I1014 19:10:51.580952  369324 main.go:141] libmachine: (addons-082251)     <serial type='pty'>
	I1014 19:10:51.580965  369324 main.go:141] libmachine: (addons-082251)       <target port='0'/>
	I1014 19:10:51.580978  369324 main.go:141] libmachine: (addons-082251)     </serial>
	I1014 19:10:51.580996  369324 main.go:141] libmachine: (addons-082251)     <console type='pty'>
	I1014 19:10:51.581013  369324 main.go:141] libmachine: (addons-082251)       <target type='serial' port='0'/>
	I1014 19:10:51.581024  369324 main.go:141] libmachine: (addons-082251)     </console>
	I1014 19:10:51.581033  369324 main.go:141] libmachine: (addons-082251)     <rng model='virtio'>
	I1014 19:10:51.581044  369324 main.go:141] libmachine: (addons-082251)       <backend model='random'>/dev/random</backend>
	I1014 19:10:51.581050  369324 main.go:141] libmachine: (addons-082251)     </rng>
	I1014 19:10:51.581055  369324 main.go:141] libmachine: (addons-082251)   </devices>
	I1014 19:10:51.581059  369324 main.go:141] libmachine: (addons-082251) </domain>
	I1014 19:10:51.581068  369324 main.go:141] libmachine: (addons-082251) 
	I1014 19:10:51.655252  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:32:f7:70 in network default
	I1014 19:10:51.655786  369324 main.go:141] libmachine: (addons-082251) starting domain...
	I1014 19:10:51.655799  369324 main.go:141] libmachine: (addons-082251) ensuring networks are active...
	I1014 19:10:51.655807  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:51.656437  369324 main.go:141] libmachine: (addons-082251) Ensuring network default is active
	I1014 19:10:51.656780  369324 main.go:141] libmachine: (addons-082251) Ensuring network mk-addons-082251 is active
	I1014 19:10:51.657606  369324 main.go:141] libmachine: (addons-082251) getting domain XML...
	I1014 19:10:51.658542  369324 main.go:141] libmachine: (addons-082251) DBG | starting domain XML:
	I1014 19:10:51.658576  369324 main.go:141] libmachine: (addons-082251) DBG | <domain type='kvm'>
	I1014 19:10:51.658586  369324 main.go:141] libmachine: (addons-082251) DBG |   <name>addons-082251</name>
	I1014 19:10:51.658594  369324 main.go:141] libmachine: (addons-082251) DBG |   <uuid>370e40dd-634a-4fc2-84c5-54395dc3dfe0</uuid>
	I1014 19:10:51.658620  369324 main.go:141] libmachine: (addons-082251) DBG |   <memory unit='KiB'>4194304</memory>
	I1014 19:10:51.658634  369324 main.go:141] libmachine: (addons-082251) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1014 19:10:51.658642  369324 main.go:141] libmachine: (addons-082251) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 19:10:51.658649  369324 main.go:141] libmachine: (addons-082251) DBG |   <os>
	I1014 19:10:51.658657  369324 main.go:141] libmachine: (addons-082251) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 19:10:51.658671  369324 main.go:141] libmachine: (addons-082251) DBG |     <boot dev='cdrom'/>
	I1014 19:10:51.658682  369324 main.go:141] libmachine: (addons-082251) DBG |     <boot dev='hd'/>
	I1014 19:10:51.658692  369324 main.go:141] libmachine: (addons-082251) DBG |     <bootmenu enable='no'/>
	I1014 19:10:51.658698  369324 main.go:141] libmachine: (addons-082251) DBG |   </os>
	I1014 19:10:51.658705  369324 main.go:141] libmachine: (addons-082251) DBG |   <features>
	I1014 19:10:51.658710  369324 main.go:141] libmachine: (addons-082251) DBG |     <acpi/>
	I1014 19:10:51.658714  369324 main.go:141] libmachine: (addons-082251) DBG |     <apic/>
	I1014 19:10:51.658719  369324 main.go:141] libmachine: (addons-082251) DBG |     <pae/>
	I1014 19:10:51.658731  369324 main.go:141] libmachine: (addons-082251) DBG |   </features>
	I1014 19:10:51.658762  369324 main.go:141] libmachine: (addons-082251) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 19:10:51.658785  369324 main.go:141] libmachine: (addons-082251) DBG |   <clock offset='utc'/>
	I1014 19:10:51.658796  369324 main.go:141] libmachine: (addons-082251) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 19:10:51.658820  369324 main.go:141] libmachine: (addons-082251) DBG |   <on_reboot>restart</on_reboot>
	I1014 19:10:51.658834  369324 main.go:141] libmachine: (addons-082251) DBG |   <on_crash>destroy</on_crash>
	I1014 19:10:51.658843  369324 main.go:141] libmachine: (addons-082251) DBG |   <devices>
	I1014 19:10:51.658875  369324 main.go:141] libmachine: (addons-082251) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 19:10:51.658900  369324 main.go:141] libmachine: (addons-082251) DBG |     <disk type='file' device='cdrom'>
	I1014 19:10:51.658912  369324 main.go:141] libmachine: (addons-082251) DBG |       <driver name='qemu' type='raw'/>
	I1014 19:10:51.658935  369324 main.go:141] libmachine: (addons-082251) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/boot2docker.iso'/>
	I1014 19:10:51.658947  369324 main.go:141] libmachine: (addons-082251) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 19:10:51.658954  369324 main.go:141] libmachine: (addons-082251) DBG |       <readonly/>
	I1014 19:10:51.658968  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 19:10:51.658984  369324 main.go:141] libmachine: (addons-082251) DBG |     </disk>
	I1014 19:10:51.659023  369324 main.go:141] libmachine: (addons-082251) DBG |     <disk type='file' device='disk'>
	I1014 19:10:51.659036  369324 main.go:141] libmachine: (addons-082251) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 19:10:51.659052  369324 main.go:141] libmachine: (addons-082251) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/addons-082251.rawdisk'/>
	I1014 19:10:51.659068  369324 main.go:141] libmachine: (addons-082251) DBG |       <target dev='hda' bus='virtio'/>
	I1014 19:10:51.659083  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 19:10:51.659093  369324 main.go:141] libmachine: (addons-082251) DBG |     </disk>
	I1014 19:10:51.659103  369324 main.go:141] libmachine: (addons-082251) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 19:10:51.659115  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 19:10:51.659124  369324 main.go:141] libmachine: (addons-082251) DBG |     </controller>
	I1014 19:10:51.659130  369324 main.go:141] libmachine: (addons-082251) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 19:10:51.659145  369324 main.go:141] libmachine: (addons-082251) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 19:10:51.659160  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 19:10:51.659168  369324 main.go:141] libmachine: (addons-082251) DBG |     </controller>
	I1014 19:10:51.659173  369324 main.go:141] libmachine: (addons-082251) DBG |     <interface type='network'>
	I1014 19:10:51.659182  369324 main.go:141] libmachine: (addons-082251) DBG |       <mac address='52:54:00:84:65:a3'/>
	I1014 19:10:51.659194  369324 main.go:141] libmachine: (addons-082251) DBG |       <source network='mk-addons-082251'/>
	I1014 19:10:51.659202  369324 main.go:141] libmachine: (addons-082251) DBG |       <model type='virtio'/>
	I1014 19:10:51.659214  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 19:10:51.659225  369324 main.go:141] libmachine: (addons-082251) DBG |     </interface>
	I1014 19:10:51.659231  369324 main.go:141] libmachine: (addons-082251) DBG |     <interface type='network'>
	I1014 19:10:51.659240  369324 main.go:141] libmachine: (addons-082251) DBG |       <mac address='52:54:00:32:f7:70'/>
	I1014 19:10:51.659251  369324 main.go:141] libmachine: (addons-082251) DBG |       <source network='default'/>
	I1014 19:10:51.659259  369324 main.go:141] libmachine: (addons-082251) DBG |       <model type='virtio'/>
	I1014 19:10:51.659271  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 19:10:51.659279  369324 main.go:141] libmachine: (addons-082251) DBG |     </interface>
	I1014 19:10:51.659294  369324 main.go:141] libmachine: (addons-082251) DBG |     <serial type='pty'>
	I1014 19:10:51.659321  369324 main.go:141] libmachine: (addons-082251) DBG |       <target type='isa-serial' port='0'>
	I1014 19:10:51.659350  369324 main.go:141] libmachine: (addons-082251) DBG |         <model name='isa-serial'/>
	I1014 19:10:51.659360  369324 main.go:141] libmachine: (addons-082251) DBG |       </target>
	I1014 19:10:51.659368  369324 main.go:141] libmachine: (addons-082251) DBG |     </serial>
	I1014 19:10:51.659378  369324 main.go:141] libmachine: (addons-082251) DBG |     <console type='pty'>
	I1014 19:10:51.659386  369324 main.go:141] libmachine: (addons-082251) DBG |       <target type='serial' port='0'/>
	I1014 19:10:51.659394  369324 main.go:141] libmachine: (addons-082251) DBG |     </console>
	I1014 19:10:51.659402  369324 main.go:141] libmachine: (addons-082251) DBG |     <input type='mouse' bus='ps2'/>
	I1014 19:10:51.659413  369324 main.go:141] libmachine: (addons-082251) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 19:10:51.659426  369324 main.go:141] libmachine: (addons-082251) DBG |     <audio id='1' type='none'/>
	I1014 19:10:51.659436  369324 main.go:141] libmachine: (addons-082251) DBG |     <memballoon model='virtio'>
	I1014 19:10:51.659447  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 19:10:51.659457  369324 main.go:141] libmachine: (addons-082251) DBG |     </memballoon>
	I1014 19:10:51.659484  369324 main.go:141] libmachine: (addons-082251) DBG |     <rng model='virtio'>
	I1014 19:10:51.659504  369324 main.go:141] libmachine: (addons-082251) DBG |       <backend model='random'>/dev/random</backend>
	I1014 19:10:51.659516  369324 main.go:141] libmachine: (addons-082251) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 19:10:51.659523  369324 main.go:141] libmachine: (addons-082251) DBG |     </rng>
	I1014 19:10:51.659541  369324 main.go:141] libmachine: (addons-082251) DBG |   </devices>
	I1014 19:10:51.659552  369324 main.go:141] libmachine: (addons-082251) DBG | </domain>
	I1014 19:10:51.659564  369324 main.go:141] libmachine: (addons-082251) DBG | 
	I1014 19:10:52.992160  369324 main.go:141] libmachine: (addons-082251) waiting for domain to start...
	I1014 19:10:52.993552  369324 main.go:141] libmachine: (addons-082251) domain is now running
	I1014 19:10:52.993578  369324 main.go:141] libmachine: (addons-082251) waiting for IP...
	I1014 19:10:52.994323  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:52.994804  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:52.994826  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:52.995034  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:52.995124  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:52.995046  369352 retry.go:31] will retry after 240.638643ms: waiting for domain to come up
	I1014 19:10:53.237595  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:53.238106  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:53.238126  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:53.238444  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:53.238509  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:53.238438  369352 retry.go:31] will retry after 275.833942ms: waiting for domain to come up
	I1014 19:10:53.516146  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:53.516635  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:53.516663  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:53.516922  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:53.517002  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:53.516925  369352 retry.go:31] will retry after 375.727939ms: waiting for domain to come up
	I1014 19:10:53.894675  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:53.895208  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:53.895238  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:53.895502  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:53.895602  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:53.895500  369352 retry.go:31] will retry after 486.490281ms: waiting for domain to come up
	I1014 19:10:54.383327  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:54.383795  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:54.383825  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:54.384060  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:54.384157  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:54.384079  369352 retry.go:31] will retry after 760.321216ms: waiting for domain to come up
	I1014 19:10:55.146206  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:55.146646  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:55.146671  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:55.146919  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:55.146974  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:55.146904  369352 retry.go:31] will retry after 873.461909ms: waiting for domain to come up
	I1014 19:10:56.021783  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:56.022334  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:56.022363  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:56.022665  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:56.022736  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:56.022658  369352 retry.go:31] will retry after 973.593368ms: waiting for domain to come up
	I1014 19:10:56.998063  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:56.998449  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:56.998474  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:56.998765  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:56.998790  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:56.998741  369352 retry.go:31] will retry after 1.025457753s: waiting for domain to come up
	I1014 19:10:58.026100  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:58.026678  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:58.026708  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:58.026953  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:58.027016  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:58.026937  369352 retry.go:31] will retry after 1.566940104s: waiting for domain to come up
	I1014 19:10:59.595839  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:10:59.596299  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:10:59.596339  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:10:59.596620  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:10:59.596640  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:10:59.596594  369352 retry.go:31] will retry after 2.180634997s: waiting for domain to come up
	I1014 19:11:01.779721  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:01.780394  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:11:01.780427  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:11:01.780715  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:11:01.780756  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:11:01.780708  369352 retry.go:31] will retry after 2.199316358s: waiting for domain to come up
	I1014 19:11:03.983774  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:03.984284  369324 main.go:141] libmachine: (addons-082251) DBG | no network interface addresses found for domain addons-082251 (source=lease)
	I1014 19:11:03.984303  369324 main.go:141] libmachine: (addons-082251) DBG | trying to list again with source=arp
	I1014 19:11:03.984761  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find current IP address of domain addons-082251 in network mk-addons-082251 (interfaces detected: [])
	I1014 19:11:03.984791  369324 main.go:141] libmachine: (addons-082251) DBG | I1014 19:11:03.984736  369352 retry.go:31] will retry after 2.740233363s: waiting for domain to come up
	I1014 19:11:06.726530  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:06.727041  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has current primary IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:06.727070  369324 main.go:141] libmachine: (addons-082251) found domain IP: 192.168.39.214
	I1014 19:11:06.727084  369324 main.go:141] libmachine: (addons-082251) reserving static IP address...
	I1014 19:11:06.727617  369324 main.go:141] libmachine: (addons-082251) DBG | unable to find host DHCP lease matching {name: "addons-082251", mac: "52:54:00:84:65:a3", ip: "192.168.39.214"} in network mk-addons-082251
	I1014 19:11:06.932864  369324 main.go:141] libmachine: (addons-082251) DBG | Getting to WaitForSSH function...
	I1014 19:11:06.932904  369324 main.go:141] libmachine: (addons-082251) reserved static IP address 192.168.39.214 for domain addons-082251
	I1014 19:11:06.932919  369324 main.go:141] libmachine: (addons-082251) waiting for SSH...
	I1014 19:11:06.936113  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:06.936590  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:minikube Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:06.936620  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:06.936760  369324 main.go:141] libmachine: (addons-082251) DBG | Using SSH client type: external
	I1014 19:11:06.936839  369324 main.go:141] libmachine: (addons-082251) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa (-rw-------)
	I1014 19:11:06.936877  369324 main.go:141] libmachine: (addons-082251) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.214 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 19:11:06.936895  369324 main.go:141] libmachine: (addons-082251) DBG | About to run SSH command:
	I1014 19:11:06.936908  369324 main.go:141] libmachine: (addons-082251) DBG | exit 0
	I1014 19:11:07.071394  369324 main.go:141] libmachine: (addons-082251) DBG | SSH cmd err, output: <nil>: 
	I1014 19:11:07.071723  369324 main.go:141] libmachine: (addons-082251) domain creation complete
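The retry.go lines above poll for the domain's DHCP lease with a growing, jittered delay (roughly 240ms up to ~2.7s) until an IP appears or a deadline passes. A minimal Go sketch of that poll-with-backoff pattern, with lookupIP as a hypothetical stand-in for the lease/ARP query; this is illustrative, not minikube's actual implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for listing the domain's interface addresses
// (source=lease, then source=arp); a real version would query libvirt.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.214", nil
}

// waitForIP polls lookupIP with a growing, jittered delay, mirroring
// the "will retry after ..." steps in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	fmt.Println(ip, err)
}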
	I1014 19:11:07.072069  369324 main.go:141] libmachine: (addons-082251) Calling .GetConfigRaw
	I1014 19:11:07.072679  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:07.072881  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:07.073056  369324 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 19:11:07.073068  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:07.074731  369324 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 19:11:07.074749  369324 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 19:11:07.074756  369324 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 19:11:07.074764  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:07.077541  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.077951  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.077977  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.078125  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:07.078367  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.078564  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.078700  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:07.078869  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:07.079158  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:07.079173  369324 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 19:11:07.190984  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 19:11:07.191015  369324 main.go:141] libmachine: Detecting the provisioner...
	I1014 19:11:07.191026  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:07.194391  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.194825  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.194855  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.195024  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:07.195234  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.195436  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.195599  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:07.195745  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:07.195975  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:07.195989  369324 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 19:11:07.311015  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1014 19:11:07.311094  369324 main.go:141] libmachine: found compatible host: buildroot
	I1014 19:11:07.311102  369324 main.go:141] libmachine: Provisioning with buildroot...
	I1014 19:11:07.311109  369324 main.go:141] libmachine: (addons-082251) Calling .GetMachineName
	I1014 19:11:07.311379  369324 buildroot.go:166] provisioning hostname "addons-082251"
	I1014 19:11:07.311406  369324 main.go:141] libmachine: (addons-082251) Calling .GetMachineName
	I1014 19:11:07.311639  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:07.314749  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.315169  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.315202  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.315445  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:07.315689  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.315849  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.315993  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:07.316175  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:07.316408  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:07.316422  369324 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-082251 && echo "addons-082251" | sudo tee /etc/hostname
	I1014 19:11:07.470426  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-082251
	
	I1014 19:11:07.470458  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:07.474130  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.474669  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.474703  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.474985  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:07.475208  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.475379  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:07.475593  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:07.475796  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:07.476004  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:07.476029  369324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-082251' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-082251/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-082251' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 19:11:07.598502  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
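The provisioning commands above run over SSH as the docker user with the profile's id_rsa key and host-key checking disabled. A minimal sketch of running one such command, assuming golang.org/x/crypto/ssh; runSSH and its arguments are illustrative, not minikube's actual helper:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest and runs a single command, returning its
// combined output; key-based auth only, host key checking disabled
// to match the -o StrictHostKeyChecking=no flags in the log.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.214:22", "docker",
		"/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa",
		`sudo hostname addons-082251 && echo "addons-082251" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}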
	I1014 19:11:07.598538  369324 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 19:11:07.598597  369324 buildroot.go:174] setting up certificates
	I1014 19:11:07.598610  369324 provision.go:84] configureAuth start
	I1014 19:11:07.598624  369324 main.go:141] libmachine: (addons-082251) Calling .GetMachineName
	I1014 19:11:07.598973  369324 main.go:141] libmachine: (addons-082251) Calling .GetIP
	I1014 19:11:07.602301  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.602701  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.602732  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.602935  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:07.605394  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.605720  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:07.605751  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:07.605877  369324 provision.go:143] copyHostCerts
	I1014 19:11:07.605958  369324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 19:11:07.606099  369324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 19:11:07.606184  369324 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 19:11:07.606251  369324 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.addons-082251 san=[127.0.0.1 192.168.39.214 addons-082251 localhost minikube]
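The server cert above is signed by the minikube CA and carries the SANs from the san=[...] list (127.0.0.1, 192.168.39.214, addons-082251, localhost, minikube). A sketch of generating such a cert with crypto/x509; the inline CA replaces the on-disk ca.pem/ca-key.pem for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads ca.pem/ca-key.pem from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// SANs taken from the san=[...] list in the log line above.
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-082251"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.214")},
		DNSNames:     []string{"addons-082251", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}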
	I1014 19:11:08.045381  369324 provision.go:177] copyRemoteCerts
	I1014 19:11:08.045457  369324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 19:11:08.045488  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.048781  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.049250  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.049276  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.049592  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.049826  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.050015  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.050167  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:08.137905  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 19:11:08.168501  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 19:11:08.199499  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 19:11:08.231063  369324 provision.go:87] duration metric: took 632.434034ms to configureAuth
	I1014 19:11:08.231099  369324 buildroot.go:189] setting minikube options for container-runtime
	I1014 19:11:08.231294  369324 config.go:182] Loaded profile config "addons-082251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:11:08.231392  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.234771  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.235112  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.235163  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.235304  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.235586  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.235765  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.235912  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.236086  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:08.236366  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:08.236389  369324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 19:11:08.497073  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 19:11:08.497109  369324 main.go:141] libmachine: Checking connection to Docker...
	I1014 19:11:08.497119  369324 main.go:141] libmachine: (addons-082251) Calling .GetURL
	I1014 19:11:08.498608  369324 main.go:141] libmachine: (addons-082251) DBG | using libvirt version 8000000
	I1014 19:11:08.501511  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.502091  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.502130  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.502307  369324 main.go:141] libmachine: Docker is up and running!
	I1014 19:11:08.502352  369324 main.go:141] libmachine: Reticulating splines...
	I1014 19:11:08.502361  369324 client.go:171] duration metric: took 17.863885082s to LocalClient.Create
	I1014 19:11:08.502392  369324 start.go:167] duration metric: took 17.863981293s to libmachine.API.Create "addons-082251"
	I1014 19:11:08.502402  369324 start.go:293] postStartSetup for "addons-082251" (driver="kvm2")
	I1014 19:11:08.502412  369324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 19:11:08.502432  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:08.502745  369324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 19:11:08.502788  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.505151  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.505528  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.505562  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.505707  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.505931  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.506108  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.506231  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:08.599168  369324 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 19:11:08.604567  369324 info.go:137] Remote host: Buildroot 2025.02
	I1014 19:11:08.604600  369324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 19:11:08.604689  369324 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 19:11:08.604723  369324 start.go:296] duration metric: took 102.314635ms for postStartSetup
	I1014 19:11:08.604770  369324 main.go:141] libmachine: (addons-082251) Calling .GetConfigRaw
	I1014 19:11:08.605383  369324 main.go:141] libmachine: (addons-082251) Calling .GetIP
	I1014 19:11:08.608698  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.609102  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.609135  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.609508  369324 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/config.json ...
	I1014 19:11:08.609727  369324 start.go:128] duration metric: took 17.989600353s to createHost
	I1014 19:11:08.609753  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.612065  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.612534  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.612560  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.612720  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.612953  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.613114  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.613279  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.613525  369324 main.go:141] libmachine: Using SSH client type: native
	I1014 19:11:08.613730  369324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.214 22 <nil> <nil>}
	I1014 19:11:08.613741  369324 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 19:11:08.724977  369324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760469068.682866340
	
	I1014 19:11:08.725002  369324 fix.go:216] guest clock: 1760469068.682866340
	I1014 19:11:08.725010  369324 fix.go:229] Guest: 2025-10-14 19:11:08.68286634 +0000 UTC Remote: 2025-10-14 19:11:08.609739958 +0000 UTC m=+18.108626375 (delta=73.126382ms)
	I1014 19:11:08.725032  369324 fix.go:200] guest clock delta is within tolerance: 73.126382ms
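The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small. A sketch of that parse-and-compare step, assuming a nine-digit fractional part from %N and a hypothetical 2s threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output; %N yields nine digits,
// so the fractional part is taken directly as nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1760469068.682866340") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}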
	I1014 19:11:08.725037  369324 start.go:83] releasing machines lock for "addons-082251", held for 18.105009215s
	I1014 19:11:08.725057  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:08.725353  369324 main.go:141] libmachine: (addons-082251) Calling .GetIP
	I1014 19:11:08.728817  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.729241  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.729266  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.729459  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:08.730186  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:08.730449  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:08.730586  369324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 19:11:08.730633  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.730698  369324 ssh_runner.go:195] Run: cat /version.json
	I1014 19:11:08.730726  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:08.733970  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.734228  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.734515  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.734542  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.734732  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.734884  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:08.734916  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:08.734985  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.735129  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:08.735217  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.735303  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:08.735402  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:08.735463  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:08.735596  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:08.850247  369324 ssh_runner.go:195] Run: systemctl --version
	I1014 19:11:08.856660  369324 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 19:11:09.015137  369324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 19:11:09.022602  369324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 19:11:09.022673  369324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 19:11:09.043622  369324 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 19:11:09.043653  369324 start.go:495] detecting cgroup driver to use...
	I1014 19:11:09.043731  369324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 19:11:09.066337  369324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 19:11:09.086565  369324 docker.go:218] disabling cri-docker service (if available) ...
	I1014 19:11:09.086652  369324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 19:11:09.105197  369324 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 19:11:09.122460  369324 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 19:11:09.273934  369324 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 19:11:09.486851  369324 docker.go:234] disabling docker service ...
	I1014 19:11:09.486927  369324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 19:11:09.502971  369324 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 19:11:09.518285  369324 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 19:11:09.676378  369324 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 19:11:09.821298  369324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
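Each competing runtime is retired with the same stop/disable/mask sequence, tolerating units that are not present on the guest. A sketch of that loop; disableUnit is illustrative and runs systemctl locally rather than through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// disableUnit stops, disables, and masks a systemd unit, treating
// failures as non-fatal since the unit may not exist on the guest.
func disableUnit(unit string) {
	for _, args := range [][]string{
		{"systemctl", "stop", "-f", unit},
		{"systemctl", "disable", unit},
		{"systemctl", "mask", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", args, err, out)
		}
	}
}

func main() {
	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		disableUnit(u)
	}
}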
	I1014 19:11:09.841545  369324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 19:11:09.865612  369324 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 19:11:09.865674  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.878419  369324 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 19:11:09.878495  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.891105  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.903597  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.916418  369324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 19:11:09.929383  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.942489  369324 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.964458  369324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 19:11:09.977320  369324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 19:11:09.988555  369324 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 19:11:09.988624  369324 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 19:11:10.008616  369324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 19:11:10.020769  369324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:11:10.167971  369324 ssh_runner.go:195] Run: sudo systemctl restart crio
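The CRI-O settings above (pause image, cgroup manager, conmon cgroup, sysctls) are applied by rewriting whole lines in 02-crio.conf, then restarting the service. A sketch of the same whole-line edit done with Go's regexp instead of sed; setKey is a hypothetical helper:

package main

import (
	"log"
	"os"
	"regexp"
)

const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

// setKey rewrites any existing `key = ...` line to the quoted value,
// the same whole-line replacement the sed commands above perform.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	conf, err := os.ReadFile(confPath)
	if err != nil {
		log.Fatal(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(confPath, conf, 0o644); err != nil {
		log.Fatal(err)
	}
	// Followed by `sudo systemctl restart crio`, as in the log.
}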
	I1014 19:11:10.278045  369324 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 19:11:10.278156  369324 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 19:11:10.283639  369324 start.go:563] Will wait 60s for crictl version
	I1014 19:11:10.283715  369324 ssh_runner.go:195] Run: which crictl
	I1014 19:11:10.287828  369324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 19:11:10.330726  369324 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 19:11:10.330823  369324 ssh_runner.go:195] Run: crio --version
	I1014 19:11:10.361269  369324 ssh_runner.go:195] Run: crio --version
	I1014 19:11:10.398378  369324 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1014 19:11:10.399432  369324 main.go:141] libmachine: (addons-082251) Calling .GetIP
	I1014 19:11:10.402170  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:10.402557  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:10.402590  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:10.402902  369324 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 19:11:10.407560  369324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 19:11:10.423013  369324 kubeadm.go:883] updating cluster {Name:addons-082251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-082251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1014 19:11:10.423140  369324 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:11:10.423609  369324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:11:10.459375  369324 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 19:11:10.459464  369324 ssh_runner.go:195] Run: which lz4
	I1014 19:11:10.464108  369324 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 19:11:10.469113  369324 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 19:11:10.469171  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1014 19:11:11.839671  369324 crio.go:462] duration metric: took 1.37561762s to copy over tarball
	I1014 19:11:11.839754  369324 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 19:11:13.478528  369324 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.638735586s)
	I1014 19:11:13.478572  369324 crio.go:469] duration metric: took 1.638867818s to extract the tarball
	I1014 19:11:13.478582  369324 ssh_runner.go:146] rm: /preloaded.tar.lz4
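The preload flow above stats the tarball on the guest, copies it over when the stat fails, extracts it under /var, reports duration metrics, and deletes the tarball. A sketch of that sequence with hypothetical sshRun/scpToGuest/remoteExists stand-ins for ssh_runner's helpers:

package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for ssh_runner's helpers.
func sshRun(cmd string) error          { fmt.Println("guest$", cmd); return nil }
func scpToGuest(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil }
func remoteExists(path string) bool    { return false } // stat failed in the log above

func main() {
	const tarball = "/preloaded.tar.lz4"
	if remoteExists(tarball) {
		fmt.Println("preload already present, skipping copy")
		return
	}
	if err := scpToGuest("preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4", tarball); err != nil {
		panic(err)
	}
	start := time.Now()
	if err := sshRun("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	sshRun("rm " + tarball)
}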
	I1014 19:11:13.519752  369324 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 19:11:13.568031  369324 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 19:11:13.568061  369324 cache_images.go:85] Images are preloaded, skipping loading
	I1014 19:11:13.568070  369324 kubeadm.go:934] updating node { 192.168.39.214 8443 v1.34.1 crio true true} ...
	I1014 19:11:13.568184  369324 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-082251 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.214
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-082251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 19:11:13.568295  369324 ssh_runner.go:195] Run: crio config
	I1014 19:11:13.617405  369324 cni.go:84] Creating CNI manager for ""
	I1014 19:11:13.617430  369324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 19:11:13.617452  369324 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 19:11:13.617476  369324 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.214 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-082251 NodeName:addons-082251 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.214"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.214 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 19:11:13.617634  369324 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.214
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-082251"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.214"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.214"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 19:11:13.617700  369324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 19:11:13.630177  369324 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 19:11:13.630292  369324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 19:11:13.642302  369324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 19:11:13.665015  369324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 19:11:13.685968  369324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1014 19:11:13.708196  369324 ssh_runner.go:195] Run: grep 192.168.39.214	control-plane.minikube.internal$ /etc/hosts
	I1014 19:11:13.712633  369324 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.214	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
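Both /etc/hosts updates above use the same idempotent pattern: filter out any existing line for the host, append the desired mapping, and install the result via a temp file and sudo cp. A sketch of the filter-and-append step (the temp-file copy is omitted); ensureHostsEntry is illustrative:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t<host>" and
// appends the desired mapping, like the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.214", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}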
	I1014 19:11:13.728126  369324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:11:13.878735  369324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:11:13.919764  369324 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251 for IP: 192.168.39.214
	I1014 19:11:13.919796  369324 certs.go:195] generating shared ca certs ...
	I1014 19:11:13.919820  369324 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:13.920002  369324 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 19:11:14.052173  369324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt ...
	I1014 19:11:14.052209  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt: {Name:mk972a8c67f12fd01694d59e33520e29116368e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.052434  369324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key ...
	I1014 19:11:14.052451  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key: {Name:mkb6d4383c24cc6ef4148af610225da5e2c70294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.052533  369324 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 19:11:14.424713  369324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt ...
	I1014 19:11:14.424750  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt: {Name:mkad1329802e7cb2a1a5b31a6cf6492d89ee8697 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.424938  369324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key ...
	I1014 19:11:14.424950  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key: {Name:mk572a2b292cc03c803b9227ef88b8ba06d291cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.425026  369324 certs.go:257] generating profile certs ...
	I1014 19:11:14.425093  369324 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.key
	I1014 19:11:14.425114  369324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt with IP's: []
	I1014 19:11:14.563735  369324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt ...
	I1014 19:11:14.563769  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: {Name:mk50fb4db728cfc21204ccf079a348c1de7426c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.563939  369324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.key ...
	I1014 19:11:14.563950  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.key: {Name:mk3f8f4440d7fdfb9436ec4e595375717928155c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.564025  369324 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key.dac3d8a5
	I1014 19:11:14.564046  369324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt.dac3d8a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.214]
	I1014 19:11:14.635242  369324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt.dac3d8a5 ...
	I1014 19:11:14.635275  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt.dac3d8a5: {Name:mk219c73a74926c1f71c0ba3fa3abe853b67d798 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.635448  369324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key.dac3d8a5 ...
	I1014 19:11:14.635462  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key.dac3d8a5: {Name:mk9d7ac851850ee15fed68fd6fdce7b6b6ecd007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:14.635531  369324 certs.go:382] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt.dac3d8a5 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt
	I1014 19:11:14.635635  369324 certs.go:386] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key.dac3d8a5 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key
	I1014 19:11:14.635692  369324 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.key
	I1014 19:11:14.635710  369324 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.crt with IP's: []
	I1014 19:11:15.090375  369324 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.crt ...
	I1014 19:11:15.090404  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.crt: {Name:mk1548107d85782abc9ea1dd312c8bcd1ae1c82b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:15.090570  369324 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.key ...
	I1014 19:11:15.090596  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.key: {Name:mkf6eca4239d0d56dfc5afe0cffb3bfc2856d622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:15.090768  369324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 19:11:15.090803  369324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 19:11:15.090826  369324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 19:11:15.090849  369324 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 19:11:15.091445  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 19:11:15.121629  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 19:11:15.150913  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 19:11:15.180930  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 19:11:15.210977  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 19:11:15.241423  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 19:11:15.269961  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 19:11:15.298851  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 19:11:15.329052  369324 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 19:11:15.360528  369324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 19:11:15.382029  369324 ssh_runner.go:195] Run: openssl version
	I1014 19:11:15.388687  369324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 19:11:15.401787  369324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:11:15.407289  369324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:11:15.407397  369324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 19:11:15.414910  369324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
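	(The b5213941.0 link name follows OpenSSL's subject-hash lookup convention: the trust directory is searched for <subject-hash>.0 files, where the hash is exactly what the `openssl x509 -hash` run above printed. The same two steps by hand, as a sketch:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem  # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0      # hash-named symlink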
	I1014 19:11:15.428471  369324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 19:11:15.433064  369324 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 19:11:15.433127  369324 kubeadm.go:400] StartCluster: {Name:addons-082251 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-082251 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:11:15.433294  369324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 19:11:15.433417  369324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 19:11:15.475172  369324 cri.go:89] found id: ""
	I1014 19:11:15.475246  369324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 19:11:15.487849  369324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 19:11:15.499959  369324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 19:11:15.514108  369324 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 19:11:15.514133  369324 kubeadm.go:157] found existing configuration files:
	
	I1014 19:11:15.514197  369324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 19:11:15.526683  369324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 19:11:15.526748  369324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 19:11:15.540639  369324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 19:11:15.553343  369324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 19:11:15.553428  369324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 19:11:15.569868  369324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 19:11:15.581486  369324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 19:11:15.581559  369324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 19:11:15.593256  369324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 19:11:15.604065  369324 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 19:11:15.604135  369324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 19:11:15.615823  369324 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 19:11:15.666209  369324 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 19:11:15.666343  369324 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 19:11:15.764964  369324 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 19:11:15.765125  369324 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 19:11:15.765284  369324 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 19:11:15.774772  369324 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 19:11:15.927980  369324 out.go:252]   - Generating certificates and keys ...
	I1014 19:11:15.928101  369324 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 19:11:15.928158  369324 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 19:11:16.087346  369324 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 19:11:16.404771  369324 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 19:11:16.642267  369324 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 19:11:16.816468  369324 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 19:11:16.862364  369324 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 19:11:16.862937  369324 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-082251 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1014 19:11:17.060828  369324 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 19:11:17.061103  369324 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-082251 localhost] and IPs [192.168.39.214 127.0.0.1 ::1]
	I1014 19:11:17.344464  369324 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 19:11:17.860754  369324 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 19:11:18.101046  369324 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 19:11:18.101139  369324 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 19:11:18.202659  369324 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 19:11:18.731516  369324 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 19:11:18.837427  369324 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 19:11:18.880236  369324 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 19:11:19.029698  369324 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 19:11:19.030277  369324 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 19:11:19.032498  369324 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 19:11:19.034468  369324 out.go:252]   - Booting up control plane ...
	I1014 19:11:19.034591  369324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 19:11:19.034690  369324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 19:11:19.034800  369324 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 19:11:19.052002  369324 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 19:11:19.052151  369324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 19:11:19.059129  369324 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 19:11:19.059933  369324 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 19:11:19.060001  369324 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 19:11:19.238993  369324 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 19:11:19.239148  369324 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 19:11:20.239906  369324 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001290358s
	I1014 19:11:20.241769  369324 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 19:11:20.241882  369324 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.214:8443/livez
	I1014 19:11:20.241980  369324 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 19:11:20.242070  369324 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 19:11:21.948640  369324 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.707263144s
	I1014 19:11:23.514144  369324 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.275734751s
	I1014 19:11:26.738160  369324 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502245167s
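	(The three control-plane-check targets above can also be probed by hand; a sketch using the same endpoints the log reports, with -k because the serving certs are cluster-signed:)
	curl -k https://192.168.39.214:8443/livez    # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler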
	I1014 19:11:26.751227  369324 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 19:11:26.767815  369324 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 19:11:26.784834  369324 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 19:11:26.785104  369324 kubeadm.go:318] [mark-control-plane] Marking the node addons-082251 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 19:11:26.797506  369324 kubeadm.go:318] [bootstrap-token] Using token: 0cpw3i.u5jpmmqb9jo62v3r
	I1014 19:11:26.798790  369324 out.go:252]   - Configuring RBAC rules ...
	I1014 19:11:26.798899  369324 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 19:11:26.803650  369324 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 19:11:26.816004  369324 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 19:11:26.822627  369324 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 19:11:26.825946  369324 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 19:11:26.830875  369324 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 19:11:27.145158  369324 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 19:11:27.595277  369324 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 19:11:28.143730  369324 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 19:11:28.144929  369324 kubeadm.go:318] 
	I1014 19:11:28.145048  369324 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 19:11:28.145069  369324 kubeadm.go:318] 
	I1014 19:11:28.145171  369324 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 19:11:28.145189  369324 kubeadm.go:318] 
	I1014 19:11:28.145224  369324 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 19:11:28.145321  369324 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 19:11:28.145413  369324 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 19:11:28.145422  369324 kubeadm.go:318] 
	I1014 19:11:28.145512  369324 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 19:11:28.145528  369324 kubeadm.go:318] 
	I1014 19:11:28.145611  369324 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 19:11:28.145619  369324 kubeadm.go:318] 
	I1014 19:11:28.145681  369324 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 19:11:28.145790  369324 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 19:11:28.145881  369324 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 19:11:28.145891  369324 kubeadm.go:318] 
	I1014 19:11:28.146031  369324 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 19:11:28.146181  369324 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 19:11:28.146193  369324 kubeadm.go:318] 
	I1014 19:11:28.146338  369324 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0cpw3i.u5jpmmqb9jo62v3r \
	I1014 19:11:28.146501  369324 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 19:11:28.146541  369324 kubeadm.go:318] 	--control-plane 
	I1014 19:11:28.146547  369324 kubeadm.go:318] 
	I1014 19:11:28.146673  369324 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 19:11:28.146683  369324 kubeadm.go:318] 
	I1014 19:11:28.146795  369324 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0cpw3i.u5jpmmqb9jo62v3r \
	I1014 19:11:28.146981  369324 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
	I1014 19:11:28.148243  369324 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 19:11:28.148273  369324 cni.go:84] Creating CNI manager for ""
	I1014 19:11:28.148283  369324 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 19:11:28.150190  369324 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 19:11:28.151436  369324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 19:11:28.163949  369324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
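	(For orientation, a bridge conflist of the kind written above typically has the following shape; this is an illustrative sketch using the podSubnet from the config earlier in this log, not the exact 496-byte file minikube generated:)
	cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF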
	I1014 19:11:28.188602  369324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 19:11:28.188711  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-082251 minikube.k8s.io/updated_at=2025_10_14T19_11_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=addons-082251 minikube.k8s.io/primary=true
	I1014 19:11:28.188712  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:28.311261  369324 ops.go:34] apiserver oom_adj: -16
	I1014 19:11:28.311343  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:28.811418  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:29.312224  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:29.811731  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:30.311478  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:30.811570  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:31.311715  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:31.811421  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:32.311479  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:32.812166  369324 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 19:11:32.918091  369324 kubeadm.go:1113] duration metric: took 4.729448399s to wait for elevateKubeSystemPrivileges
	I1014 19:11:32.918146  369324 kubeadm.go:402] duration metric: took 17.485021836s to StartCluster
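	(The repeated `kubectl get sa default` runs above are a readiness poll: bring-up is only treated as complete once the default ServiceAccount exists, since pods created before that point would be rejected. A minimal equivalent of the wait loop, assuming the same binary and kubeconfig paths as in the log:)
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # retry until the ServiceAccount controller has created it
	done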
	I1014 19:11:32.918172  369324 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:32.918381  369324 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:11:32.918999  369324 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 19:11:32.919256  369324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 19:11:32.919299  369324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.214 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 19:11:32.919372  369324 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 19:11:32.919486  369324 addons.go:69] Setting yakd=true in profile "addons-082251"
	I1014 19:11:32.919512  369324 addons.go:238] Setting addon yakd=true in "addons-082251"
	I1014 19:11:32.919554  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919558  369324 addons.go:69] Setting inspektor-gadget=true in profile "addons-082251"
	I1014 19:11:32.919580  369324 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-082251"
	I1014 19:11:32.919584  369324 addons.go:69] Setting cloud-spanner=true in profile "addons-082251"
	I1014 19:11:32.919593  369324 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-082251"
	I1014 19:11:32.919593  369324 addons.go:69] Setting ingress=true in profile "addons-082251"
	I1014 19:11:32.919603  369324 addons.go:238] Setting addon cloud-spanner=true in "addons-082251"
	I1014 19:11:32.919608  369324 addons.go:69] Setting registry-creds=true in profile "addons-082251"
	I1014 19:11:32.919563  369324 addons.go:69] Setting default-storageclass=true in profile "addons-082251"
	I1014 19:11:32.919619  369324 addons.go:238] Setting addon registry-creds=true in "addons-082251"
	I1014 19:11:32.919624  369324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-082251"
	I1014 19:11:32.919633  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919644  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919697  369324 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-082251"
	I1014 19:11:32.919751  369324 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-082251"
	I1014 19:11:32.919774  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919840  369324 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-082251"
	I1014 19:11:32.919905  369324 addons.go:69] Setting registry=true in profile "addons-082251"
	I1014 19:11:32.919950  369324 addons.go:238] Setting addon registry=true in "addons-082251"
	I1014 19:11:32.919977  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.920016  369324 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-082251"
	I1014 19:11:32.920051  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919598  369324 addons.go:69] Setting ingress-dns=true in profile "addons-082251"
	I1014 19:11:32.920099  369324 addons.go:238] Setting addon ingress-dns=true in "addons-082251"
	I1014 19:11:32.920101  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920104  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.919611  369324 addons.go:238] Setting addon ingress=true in "addons-082251"
	I1014 19:11:32.920130  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.920138  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920136  369324 addons.go:69] Setting gcp-auth=true in profile "addons-082251"
	I1014 19:11:32.920151  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.919633  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.920174  369324 addons.go:69] Setting metrics-server=true in profile "addons-082251"
	I1014 19:11:32.920194  369324 addons.go:238] Setting addon metrics-server=true in "addons-082251"
	I1014 19:11:32.920214  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.920480  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920506  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920521  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920523  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920539  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920546  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920559  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920566  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920576  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920577  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920600  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920139  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920150  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920658  369324 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-082251"
	I1014 19:11:32.920674  369324 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-082251"
	I1014 19:11:32.920676  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.919564  369324 config.go:182] Loaded profile config "addons-082251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:11:32.920682  369324 addons.go:69] Setting storage-provisioner=true in profile "addons-082251"
	I1014 19:11:32.920693  369324 addons.go:69] Setting volcano=true in profile "addons-082251"
	I1014 19:11:32.920694  369324 addons.go:238] Setting addon storage-provisioner=true in "addons-082251"
	I1014 19:11:32.920708  369324 addons.go:69] Setting volumesnapshots=true in profile "addons-082251"
	I1014 19:11:32.920709  369324 addons.go:238] Setting addon volcano=true in "addons-082251"
	I1014 19:11:32.920155  369324 mustload.go:65] Loading cluster: addons-082251
	I1014 19:11:32.920717  369324 addons.go:238] Setting addon volumesnapshots=true in "addons-082251"
	I1014 19:11:32.920529  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.919588  369324 addons.go:238] Setting addon inspektor-gadget=true in "addons-082251"
	I1014 19:11:32.920839  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920865  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.920920  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.920960  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.921096  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.921150  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.921489  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.921526  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.921575  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.921605  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.921643  369324 config.go:182] Loaded profile config "addons-082251": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:11:32.921799  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.922015  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.922163  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.922199  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.922424  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.922462  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.923657  369324 out.go:179] * Verifying Kubernetes components...
	I1014 19:11:32.925245  369324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 19:11:32.936628  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.936695  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.939845  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.939901  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.940839  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I1014 19:11:32.941016  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35883
	I1014 19:11:32.944656  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36299
	I1014 19:11:32.944841  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.945476  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.945491  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.945498  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.945918  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.946598  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.948211  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.948050  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.948351  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.948100  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.948782  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.949681  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.949740  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.950127  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.950146  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.950606  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.951622  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:32.956983  369324 addons.go:238] Setting addon default-storageclass=true in "addons-082251"
	I1014 19:11:32.957040  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:32.957493  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.957537  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.966476  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I1014 19:11:32.968663  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45533
	I1014 19:11:32.969232  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.969810  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.969834  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.970293  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.970948  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.970991  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.971205  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37581
	I1014 19:11:32.972610  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I1014 19:11:32.972914  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.973650  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.973669  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.973750  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.974220  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.974411  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.974521  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.975141  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.975207  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.975516  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.976027  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.976349  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.976401  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.976502  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.976518  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.976989  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.977596  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.977638  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.979259  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35333
	I1014 19:11:32.979718  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.980135  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.980149  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.980520  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:32.981061  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:32.981117  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:32.987232  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36913
	I1014 19:11:32.987264  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38695
	I1014 19:11:32.988052  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.996519  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38601
	I1014 19:11:32.996533  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41583
	I1014 19:11:32.996517  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45317
	I1014 19:11:32.996688  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36785
	I1014 19:11:32.996726  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.996741  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1014 19:11:32.996805  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.997020  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.997616  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.997726  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.997807  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:32.997906  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I1014 19:11:32.997949  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:32.997976  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:32.999993  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43201
	I1014 19:11:33.000046  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.000065  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.000208  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.000251  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.000299  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.000328  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.000399  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.000457  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.000537  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.000556  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I1014 19:11:33.001327  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.001361  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I1014 19:11:33.001364  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.001399  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.001414  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.001480  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.001941  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.001980  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.002448  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.002487  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.002492  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.002584  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.002810  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.002823  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.002846  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.002861  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.002825  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.002948  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.003372  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.003592  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.003605  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.003672  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.003688  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.003767  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.004036  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.004070  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.004604  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.004609  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.004663  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.004760  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.004936  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.004993  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.005063  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.005081  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.005383  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.006370  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.006799  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.006859  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.007007  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.009236  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:33.011923  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39623
	I1014 19:11:33.012389  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34057
	I1014 19:11:33.012761  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.013612  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.013664  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.013734  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.013769  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.013901  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33007
	I1014 19:11:33.014101  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.014843  369324 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1014 19:11:33.014873  369324 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-082251"
	I1014 19:11:33.014882  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.014915  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.014917  369324 host.go:66] Checking if "addons-082251" exists ...
	I1014 19:11:33.015282  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.015348  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.015847  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.016268  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.016296  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.016550  369324 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:11:33.016568  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1014 19:11:33.016588  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.017023  369324 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1014 19:11:33.017498  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.017604  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.018103  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.018122  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.018968  369324 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:11:33.018994  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1014 19:11:33.019012  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.019040  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.019054  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.019778  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.020105  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.021931  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.022098  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.022113  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.022556  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.023184  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.023219  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.023650  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.023976  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.027409  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.027484  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.027555  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.027571  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.030339  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I1014 19:11:33.030547  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.030646  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.030707  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.030936  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.031119  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.031889  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.032084  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.032102  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.032631  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.032658  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.032731  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.032933  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.033122  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.033188  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.033381  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.033549  369324 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1014 19:11:33.034861  369324 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 19:11:33.035016  369324 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:11:33.035031  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 19:11:33.035051  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.035845  369324 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 19:11:33.035865  369324 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 19:11:33.035886  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.041078  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1014 19:11:33.041103  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I1014 19:11:33.041282  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.041937  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.042445  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.043007  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.043029  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.043353  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.043374  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.044103  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.044175  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.044482  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.044503  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.044634  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.044824  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.045228  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.045269  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.045587  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.046246  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.046482  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.046731  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.048630  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I1014 19:11:33.048821  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.050392  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.050476  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1014 19:11:33.050600  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.050794  369324 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 19:11:33.051177  369324 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 19:11:33.051220  369324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 19:11:33.051241  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.051210  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.051331  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.052084  369324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:11:33.052105  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 19:11:33.052126  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.052927  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 19:11:33.054582  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.054607  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.054654  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I1014 19:11:33.054712  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.054731  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.054889  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.054907  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.055678  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.055854  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.055920  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.056235  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.056304  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.056419  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.056583  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I1014 19:11:33.056977  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.057066  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 19:11:33.057490  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.057507  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.057516  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.057928  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.058532  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.058553  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.058621  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.058673  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.058893  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44159
	I1014 19:11:33.059246  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.059634  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.060281  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.060785  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.061285  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.061307  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.061434  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.061489  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.061770  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 19:11:33.061911  369324 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1014 19:11:33.061912  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.062270  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.063284  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.063492  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.063922  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1014 19:11:33.063996  369324 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1014 19:11:33.063992  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.064016  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 19:11:33.064082  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.064165  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.064183  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.064455  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.064488  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.064542  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.065120  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.065239  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.065428  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.065568  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.065674  369324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:11:33.065727  369324 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 19:11:33.065813  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 19:11:33.066239  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.066358  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.066407  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.066770  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.067606  369324 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:11:33.067632  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 19:11:33.067650  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.067709  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.067946  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.067966  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.068089  369324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1014 19:11:33.068093  369324 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1014 19:11:33.068848  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.069000  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.069091  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 19:11:33.069425  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36001
	I1014 19:11:33.069997  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.070605  369324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:11:33.070703  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I1014 19:11:33.071071  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.071794  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.071185  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.071445  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.072393  369324 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:11:33.071456  369324 out.go:179]   - Using image docker.io/registry:3.0.0
	I1014 19:11:33.072087  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 19:11:33.072396  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.072967  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.073612  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.072718  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 19:11:33.073693  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.073712  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.073913  369324 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 19:11:33.073931  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 19:11:33.073948  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.073952  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.074021  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I1014 19:11:33.072801  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.074046  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.073797  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.073970  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.075002  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.074687  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.074694  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.075232  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 19:11:33.075565  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.075573  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.075721  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.075759  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.075775  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.076162  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I1014 19:11:33.076244  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.076426  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1014 19:11:33.076822  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.076843  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.077003  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.077102  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.077219  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:33.077326  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:33.077427  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:33.077586  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.077647  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:33.077659  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.077672  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.077739  369324 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 19:11:33.077821  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 19:11:33.077839  369324 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 19:11:33.077858  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.077959  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:33.077993  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:33.078006  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:33.078014  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:33.078020  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:33.078104  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.078289  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.078468  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:33.078479  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:33.078659  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	W1014 19:11:33.079222  369324 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 19:11:33.079608  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 19:11:33.079692  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 19:11:33.079724  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.079953  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.080252  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.081185  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.082450  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.082516  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.082546  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.083058  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.083304  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.083582  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.083755  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.083858  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.084245  369324 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1014 19:11:33.084742  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.085102  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.085125  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.085395  369324 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 19:11:33.085415  369324 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1014 19:11:33.085434  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.085500  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.085680  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.085874  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.086095  369324 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1014 19:11:33.086191  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.087002  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.087547  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.087576  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.087581  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.087620  369324 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 19:11:33.087641  369324 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 19:11:33.087663  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.087884  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.088218  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.088457  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.088564  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.088741  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.088778  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.089023  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.089219  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.089460  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.089642  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.090760  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.091277  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.091340  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.091565  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.091745  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.091897  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.092050  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.092405  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.092925  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.092958  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.093041  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.093203  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.093493  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.093639  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:33.099180  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39087
	I1014 19:11:33.099613  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:33.100061  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:33.100085  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:33.100460  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:33.100695  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:33.102815  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:33.104754  369324 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 19:11:33.106178  369324 out.go:179]   - Using image docker.io/busybox:stable
	I1014 19:11:33.107325  369324 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:11:33.107345  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 19:11:33.107363  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:33.110894  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.111488  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:33.111531  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:33.111733  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:33.111951  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:33.112105  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:33.112256  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	W1014 19:11:33.511129  369324 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48432->192.168.39.214:22: read: connection reset by peer
	I1014 19:11:33.511189  369324 retry.go:31] will retry after 222.687144ms: ssh: handshake failed: read tcp 192.168.39.1:48432->192.168.39.214:22: read: connection reset by peer
	I1014 19:11:33.611415  369324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 19:11:33.611471  369324 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 19:11:33.806861  369324 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 19:11:33.806908  369324 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 19:11:33.808747  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1014 19:11:33.978419  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 19:11:34.064628  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 19:11:34.105718  369324 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 19:11:34.105754  369324 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 19:11:34.139025  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 19:11:34.156144  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 19:11:34.181654  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 19:11:34.206015  369324 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 19:11:34.206056  369324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 19:11:34.246003  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 19:11:34.259681  369324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 19:11:34.259714  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 19:11:34.326393  369324 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 19:11:34.326432  369324 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 19:11:34.377461  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 19:11:34.377498  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 19:11:34.417828  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 19:11:34.428271  369324 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:11:34.428297  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 19:11:34.501363  369324 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:34.501391  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1014 19:11:34.523167  369324 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 19:11:34.523199  369324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 19:11:34.642068  369324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 19:11:34.642097  369324 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 19:11:34.644760  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 19:11:34.644794  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 19:11:34.648570  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 19:11:34.670462  369324 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 19:11:34.670492  369324 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 19:11:34.794377  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 19:11:34.855400  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:34.908232  369324 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 19:11:34.908280  369324 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 19:11:34.957172  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 19:11:34.957209  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 19:11:34.999552  369324 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:11:34.999593  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 19:11:34.999563  369324 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:11:34.999674  369324 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 19:11:35.215457  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 19:11:35.215494  369324 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 19:11:35.233145  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 19:11:35.233183  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 19:11:35.312829  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 19:11:35.375932  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 19:11:35.423871  369324 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:11:35.423904  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 19:11:35.441999  369324 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 19:11:35.442030  369324 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 19:11:35.840882  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:11:35.905857  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 19:11:35.905881  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 19:11:36.382924  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 19:11:36.382955  369324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 19:11:36.506740  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 19:11:36.506766  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 19:11:36.762877  369324 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.151367316s)
	I1014 19:11:36.762915  369324 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 19:11:36.762966  369324 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.151474331s)
	I1014 19:11:36.763015  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.954232838s)
	I1014 19:11:36.763051  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.784589848s)
	I1014 19:11:36.763070  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.763080  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.763085  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.763090  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.763161  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.698475467s)
	I1014 19:11:36.763209  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.763224  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.763960  369324 node_ready.go:35] waiting up to 6m0s for node "addons-082251" to be "Ready" ...
	I1014 19:11:36.764172  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.764181  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.764177  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.764199  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.764214  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.764215  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.764222  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.764229  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.764232  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.764238  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.764241  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.764369  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.764412  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.764440  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.764453  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.764594  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.764602  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.764613  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.764619  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.764635  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.764645  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.764770  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.765098  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.765120  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.776130  369324 node_ready.go:49] node "addons-082251" is "Ready"
	I1014 19:11:36.776177  369324 node_ready.go:38] duration metric: took 12.188175ms for node "addons-082251" to be "Ready" ...
	I1014 19:11:36.776202  369324 api_server.go:52] waiting for apiserver process to appear ...
	I1014 19:11:36.776352  369324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:11:36.784237  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:36.784265  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:36.784573  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:36.784608  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:36.784623  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:36.945907  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 19:11:36.945931  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 19:11:37.215219  369324 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 19:11:37.215248  369324 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 19:11:37.336910  369324 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-082251" context rescaled to 1 replicas
	I1014 19:11:37.592709  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 19:11:38.732899  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.593819127s)
	I1014 19:11:38.732979  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.732994  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.732987  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.576798381s)
	I1014 19:11:38.733035  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733056  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733035  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.551340382s)
	I1014 19:11:38.733119  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733131  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733080  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.487045186s)
	I1014 19:11:38.733194  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733202  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733372  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.733385  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.733391  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.733401  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733407  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.733409  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733415  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.733452  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.733460  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.733467  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733473  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733506  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.733522  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.733530  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.733537  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.733756  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.733764  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.733778  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.733785  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.733800  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.733815  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.734042  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.734066  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.734272  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.734290  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:38.734296  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:38.734303  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:38.734305  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.736872  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:38.736882  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:38.736906  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:40.498143  369324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 19:11:40.498186  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:40.501939  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:40.502447  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:40.502483  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:40.502805  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:40.503074  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:40.503307  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:40.503532  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:40.733166  369324 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
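
The sshutil line above opens the SSH session that the surrounding "scp memory" transfers run over: the file content is streamed from memory across the connection rather than copied with a separate scp binary. A hand-run equivalent of that session, reusing the host, port, user, and key path from the log (the echo payload is only a placeholder):

ssh -p 22 -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa \
    docker@192.168.39.214 'echo connected'
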
	I1014 19:11:40.824778  369324 addons.go:238] Setting addon gcp-auth=true in "addons-082251"
	I1014 19:11:40.824853  369324 host.go:66] Checking if "addons-082251" exists ...
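
addons.go:238 records that the gcp-auth addon is being switched on inside the running profile. The user-facing command that exercises this same code path would be (a sketch, assuming the profile name from the log):

minikube -p addons-082251 addons enable gcp-auth
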
	I1014 19:11:40.825181  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:40.825240  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:40.840751  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I1014 19:11:40.841250  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:40.841789  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:40.841811  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:40.842306  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:40.842967  369324 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:11:40.843020  369324 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:11:40.857513  369324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I1014 19:11:40.857940  369324 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:11:40.858464  369324 main.go:141] libmachine: Using API Version  1
	I1014 19:11:40.858494  369324 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:11:40.858911  369324 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:11:40.859213  369324 main.go:141] libmachine: (addons-082251) Calling .GetState
	I1014 19:11:40.861399  369324 main.go:141] libmachine: (addons-082251) Calling .DriverName
	I1014 19:11:40.861677  369324 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 19:11:40.861709  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHHostname
	I1014 19:11:40.864986  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:40.865491  369324 main.go:141] libmachine: (addons-082251) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:65:a3", ip: ""} in network mk-addons-082251: {Iface:virbr1 ExpiryTime:2025-10-14 20:11:06 +0000 UTC Type:0 Mac:52:54:00:84:65:a3 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:addons-082251 Clientid:01:52:54:00:84:65:a3}
	I1014 19:11:40.865517  369324 main.go:141] libmachine: (addons-082251) DBG | domain addons-082251 has defined IP address 192.168.39.214 and MAC address 52:54:00:84:65:a3 in network mk-addons-082251
	I1014 19:11:40.865743  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHPort
	I1014 19:11:40.865919  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHKeyPath
	I1014 19:11:40.866065  369324 main.go:141] libmachine: (addons-082251) Calling .GetSSHUsername
	I1014 19:11:40.866197  369324 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/addons-082251/id_rsa Username:docker}
	I1014 19:11:42.113755  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.695891255s)
	I1014 19:11:42.113817  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.113831  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.113884  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.465273636s)
	I1014 19:11:42.113938  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.113954  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.113952  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.319530727s)
	I1014 19:11:42.114042  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.258613583s)
	I1014 19:11:42.114060  369324 main.go:141] libmachine: Making call to close driver server
	W1014 19:11:42.114069  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:42.114076  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114089  369324 retry.go:31] will retry after 339.009945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
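
The two identical dumps above are one failure logged twice, once as the apply error (addons.go:461) and once in the retry announcement (retry.go:31). The substance is the final stderr line: a document in ig-crd.yaml is missing its top-level apiVersion and kind fields, which every Kubernetes manifest must declare, so client-side validation rejects that file even though the other manifests were created. (The AppArmor line is an unrelated deprecation warning: since v1.30 the annotation is superseded by the securityContext appArmorProfile field.) A minimal sketch of a manifest that passes this validation; the CRD shown is hypothetical, not the gadget CRD from the log:

kubectl apply --validate=true -f - <<'EOF'
# apiVersion and kind are exactly the fields the error reports as missing
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  names:
    kind: Widget
    plural: widgets
    singular: widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
EOF
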
	I1014 19:11:42.114115  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.801245857s)
	I1014 19:11:42.114141  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.114146  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.738178306s)
	I1014 19:11:42.114162  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.114171  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114176  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.114187  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.114189  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.114195  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.114203  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.114205  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.114213  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114214  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114704  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.114737  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.114300  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114756  369324 addons.go:479] Verifying addon ingress=true in "addons-082251"
	I1014 19:11:42.114352  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.273402687s)
	W1014 19:11:42.114819  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 19:11:42.114837  369324 retry.go:31] will retry after 361.346183ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
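
The failure above is an ordering race rather than a bad manifest: the VolumeSnapshotClass object and the CRDs that define its kind are sent in a single kubectl invocation, and the API server has not finished registering the new types when the custom resource arrives, hence "ensure CRDs are installed first". The scheduled retry can succeed only because the CRDs created on the first pass are established by then. A sketch of a two-phase apply that avoids the race (basenames are the files from the log; the timeout is illustrative):

# phase 1: register the snapshot CRDs and wait until the API server accepts the kinds
kubectl apply \
  -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
  -f snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl wait --for=condition=Established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
  crd/volumesnapshots.snapshot.storage.k8s.io

# phase 2: the custom resource's kind is now known, so the mapping lookup succeeds
kubectl apply -f csi-hostpath-snapshotclass.yaml
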
	I1014 19:11:42.115110  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.115137  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.115144  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.115152  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.115160  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.114365  369324 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.337989676s)
	I1014 19:11:42.115238  369324 api_server.go:72] duration metric: took 9.195888944s to wait for apiserver process to appear ...
	I1014 19:11:42.115247  369324 api_server.go:88] waiting for apiserver healthz status ...
	I1014 19:11:42.115264  369324 api_server.go:253] Checking apiserver healthz at https://192.168.39.214:8443/healthz ...
	I1014 19:11:42.114429  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.114464  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.114471  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.115742  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.115755  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.115764  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.115984  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.116022  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.116029  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.116037  369324 addons.go:479] Verifying addon registry=true in "addons-082251"
	I1014 19:11:42.116328  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.116365  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.116405  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.116416  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.116932  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.116964  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.116982  369324 addons.go:479] Verifying addon metrics-server=true in "addons-082251"
	I1014 19:11:42.114386  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.117040  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.117052  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.117062  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.117371  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.117387  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.119497  369324 out.go:179] * Verifying ingress addon...
	I1014 19:11:42.119503  369324 out.go:179] * Verifying registry addon...
	I1014 19:11:42.120371  369324 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-082251 service yakd-dashboard -n yakd-dashboard
	
	I1014 19:11:42.121820  369324 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 19:11:42.121820  369324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 19:11:42.129814  369324 api_server.go:279] https://192.168.39.214:8443/healthz returned 200:
	ok
	I1014 19:11:42.130861  369324 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 19:11:42.130882  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:42.131142  369324 api_server.go:141] control plane version: v1.34.1
	I1014 19:11:42.131177  369324 api_server.go:131] duration metric: took 15.920719ms to wait for apiserver health ...
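
The healthz probe above needs no credentials: /healthz (like /version, /livez, and /readyz) is exposed to unauthenticated clients through the default system:public-info-viewer binding, so api_server.go can simply GET it over HTTPS. A hand check against the same endpoint (a sketch; -k skips verification of the cluster's self-signed CA):

curl -sk https://192.168.39.214:8443/healthz    # prints: ok
curl -sk https://192.168.39.214:8443/version    # reports the control plane version, v1.34.1 here
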
	I1014 19:11:42.131191  369324 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 19:11:42.139708  369324 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 19:11:42.139737  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:42.148465  369324 system_pods.go:59] 17 kube-system pods found
	I1014 19:11:42.148510  369324 system_pods.go:61] "amd-gpu-device-plugin-wjxgm" [3866b9b9-cfa5-423e-aadf-3969d88023ec] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 19:11:42.148521  369324 system_pods.go:61] "coredns-66bc5c9577-bkk9j" [e1568beb-1e2a-4022-85d5-2d7b7674dd78] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 19:11:42.148537  369324 system_pods.go:61] "coredns-66bc5c9577-rpkbj" [c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04] Running
	I1014 19:11:42.148543  369324 system_pods.go:61] "etcd-addons-082251" [a2da4743-9229-44d8-8080-a59ed3d6bb1d] Running
	I1014 19:11:42.148548  369324 system_pods.go:61] "kube-apiserver-addons-082251" [157c1450-49d3-4d72-822c-a10dcdc73e41] Running
	I1014 19:11:42.148565  369324 system_pods.go:61] "kube-controller-manager-addons-082251" [49a48842-87ad-4427-9232-bb33c2da7c94] Running
	I1014 19:11:42.148574  369324 system_pods.go:61] "kube-ingress-dns-minikube" [078e3d8d-9557-476e-bdb3-72041038eef4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 19:11:42.148580  369324 system_pods.go:61] "kube-proxy-rl7gc" [52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c] Running
	I1014 19:11:42.148589  369324 system_pods.go:61] "kube-scheduler-addons-082251" [06ca7094-b4c5-4688-a9d8-6a99acd41760] Running
	I1014 19:11:42.148597  369324 system_pods.go:61] "metrics-server-85b7d694d7-8pqv5" [ca2ee05c-08b9-4d0e-b306-0fc54ab16eb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 19:11:42.148614  369324 system_pods.go:61] "nvidia-device-plugin-daemonset-r6zsz" [dab834be-d432-4ec8-bbba-8cdbd68df25c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 19:11:42.148624  369324 system_pods.go:61] "registry-6b586f9694-wwf86" [ad5c8d48-73fd-4a58-bb4a-7aa0b51956fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 19:11:42.148634  369324 system_pods.go:61] "registry-creds-764b6fb674-47xvv" [59883ef0-b775-4225-b353-d5c88d5afebd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 19:11:42.148643  369324 system_pods.go:61] "registry-proxy-xqw5q" [b201e8db-0b5b-4101-8b24-9c1cf511c81b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 19:11:42.148655  369324 system_pods.go:61] "snapshot-controller-7d9fbc56b8-klmrt" [0e4f3aa7-1839-40f5-bdd2-585b2a57d164] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 19:11:42.148664  369324 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zwlvs" [4a0d770e-66b2-4b5e-9b51-712ab2f4f96b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 19:11:42.148678  369324 system_pods.go:61] "storage-provisioner" [1426d20a-4d3a-4473-b6b5-e213b9eb7c6d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 19:11:42.148690  369324 system_pods.go:74] duration metric: took 17.483577ms to wait for pod list to return data ...
	I1014 19:11:42.148705  369324 default_sa.go:34] waiting for default service account to be created ...
	I1014 19:11:42.162324  369324 default_sa.go:45] found service account: "default"
	I1014 19:11:42.162354  369324 default_sa.go:55] duration metric: took 13.638033ms for default service account to be created ...
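
default_sa.go is polling for the "default" ServiceAccount that the controller manager creates in every new namespace; a one-shot equivalent of that wait (sketch):

kubectl --context addons-082251 -n default get serviceaccount default
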
	I1014 19:11:42.162368  369324 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 19:11:42.179559  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.179586  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.179906  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.179949  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.179958  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.235872  369324 system_pods.go:86] 17 kube-system pods found
	I1014 19:11:42.235913  369324 system_pods.go:89] "amd-gpu-device-plugin-wjxgm" [3866b9b9-cfa5-423e-aadf-3969d88023ec] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 19:11:42.235924  369324 system_pods.go:89] "coredns-66bc5c9577-bkk9j" [e1568beb-1e2a-4022-85d5-2d7b7674dd78] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 19:11:42.235935  369324 system_pods.go:89] "coredns-66bc5c9577-rpkbj" [c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04] Running
	I1014 19:11:42.235940  369324 system_pods.go:89] "etcd-addons-082251" [a2da4743-9229-44d8-8080-a59ed3d6bb1d] Running
	I1014 19:11:42.235945  369324 system_pods.go:89] "kube-apiserver-addons-082251" [157c1450-49d3-4d72-822c-a10dcdc73e41] Running
	I1014 19:11:42.235952  369324 system_pods.go:89] "kube-controller-manager-addons-082251" [49a48842-87ad-4427-9232-bb33c2da7c94] Running
	I1014 19:11:42.235960  369324 system_pods.go:89] "kube-ingress-dns-minikube" [078e3d8d-9557-476e-bdb3-72041038eef4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 19:11:42.235970  369324 system_pods.go:89] "kube-proxy-rl7gc" [52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c] Running
	I1014 19:11:42.235977  369324 system_pods.go:89] "kube-scheduler-addons-082251" [06ca7094-b4c5-4688-a9d8-6a99acd41760] Running
	I1014 19:11:42.235986  369324 system_pods.go:89] "metrics-server-85b7d694d7-8pqv5" [ca2ee05c-08b9-4d0e-b306-0fc54ab16eb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 19:11:42.235997  369324 system_pods.go:89] "nvidia-device-plugin-daemonset-r6zsz" [dab834be-d432-4ec8-bbba-8cdbd68df25c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 19:11:42.236007  369324 system_pods.go:89] "registry-6b586f9694-wwf86" [ad5c8d48-73fd-4a58-bb4a-7aa0b51956fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 19:11:42.236015  369324 system_pods.go:89] "registry-creds-764b6fb674-47xvv" [59883ef0-b775-4225-b353-d5c88d5afebd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1014 19:11:42.236033  369324 system_pods.go:89] "registry-proxy-xqw5q" [b201e8db-0b5b-4101-8b24-9c1cf511c81b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 19:11:42.236046  369324 system_pods.go:89] "snapshot-controller-7d9fbc56b8-klmrt" [0e4f3aa7-1839-40f5-bdd2-585b2a57d164] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 19:11:42.236061  369324 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zwlvs" [4a0d770e-66b2-4b5e-9b51-712ab2f4f96b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 19:11:42.236070  369324 system_pods.go:89] "storage-provisioner" [1426d20a-4d3a-4473-b6b5-e213b9eb7c6d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 19:11:42.236082  369324 system_pods.go:126] duration metric: took 73.706126ms to wait for k8s-apps to be running ...
	I1014 19:11:42.236098  369324 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 19:11:42.236150  369324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
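
The kubelet check above relies on systemctl's exit status rather than its output: with --quiet, is-active prints nothing and returns 0 only when the unit is active, which is all ssh_runner needs. The same test run by hand (a sketch):

if sudo systemctl is-active --quiet kubelet; then
  echo "kubelet is running"
else
  echo "kubelet is not running" >&2
fi
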
	I1014 19:11:42.453581  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:42.477306  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 19:11:42.641264  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:42.643921  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:42.731952  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.139179131s)
	I1014 19:11:42.732024  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.732036  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.732062  369324 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.870348445s)
	I1014 19:11:42.732121  369324 system_svc.go:56] duration metric: took 496.021757ms WaitForService to wait for kubelet
	I1014 19:11:42.732137  369324 kubeadm.go:586] duration metric: took 9.812787332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 19:11:42.732159  369324 node_conditions.go:102] verifying NodePressure condition ...
	I1014 19:11:42.732348  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.732365  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.732389  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.732426  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:42.732437  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:42.732703  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:42.732742  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:42.732769  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:42.732785  369324 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-082251"
	I1014 19:11:42.733825  369324 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1014 19:11:42.734809  369324 out.go:179] * Verifying csi-hostpath-driver addon...
	I1014 19:11:42.736260  369324 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 19:11:42.736993  369324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 19:11:42.737112  369324 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 19:11:42.737135  369324 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 19:11:42.761871  369324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 19:11:42.761913  369324 node_conditions.go:123] node cpu capacity is 2
	I1014 19:11:42.761932  369324 node_conditions.go:105] duration metric: took 29.767821ms to run NodePressure ...
	I1014 19:11:42.761944  369324 start.go:241] waiting for startup goroutines ...
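
The NodePressure verification reads the node's advertised capacity, 2 CPUs and 17734596Ki of ephemeral storage here, to confirm the node is not under resource pressure. The same figures are visible with a one-off query (sketch):

kubectl --context addons-082251 get node addons-082251 -o jsonpath='{.status.capacity}'
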
	I1014 19:11:42.773194  369324 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 19:11:42.773219  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:42.865717  369324 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 19:11:42.865752  369324 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 19:11:42.936631  369324 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:11:42.936654  369324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 19:11:43.055606  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 19:11:43.128797  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:43.131895  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:43.244730  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:43.626794  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:43.628669  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:43.771525  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:44.138431  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:44.138516  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:44.244799  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:44.672477  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:44.672600  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:44.797444  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:45.142351  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:45.143507  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:45.245077  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:45.448386  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.970995644s)
	I1014 19:11:45.448456  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:45.448475  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:45.448569  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.392911066s)
	I1014 19:11:45.448613  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.994989439s)
	I1014 19:11:45.448631  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:45.448647  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	W1014 19:11:45.448647  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:45.448672  369324 retry.go:31] will retry after 251.40431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:45.448833  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:45.448890  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:45.448909  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:45.448918  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:45.448930  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:45.448932  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:45.448948  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:45.448958  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:11:45.448969  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:11:45.449178  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:45.449257  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:45.449263  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:45.449215  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:11:45.449290  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:11:45.449338  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:11:45.450432  369324 addons.go:479] Verifying addon gcp-auth=true in "addons-082251"
	I1014 19:11:45.452210  369324 out.go:179] * Verifying gcp-auth addon...
	I1014 19:11:45.454585  369324 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 19:11:45.463699  369324 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 19:11:45.463730  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
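
Each kapi.go:96 line that follows is one tick of a poll loop that re-lists pods by label until the selected pod reports Ready. A declarative equivalent of the gcp-auth wait (a sketch; the timeout is illustrative):

kubectl --context addons-082251 -n gcp-auth wait pod \
  -l kubernetes.io/minikube-addons=gcp-auth \
  --for=condition=Ready --timeout=300s
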
	I1014 19:11:45.626857  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:45.628238  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:45.700851  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:45.741286  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:45.959774  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:46.132206  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:46.133562  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:46.243480  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:46.461725  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:46.628449  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:46.628612  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:46.745420  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:46.962752  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:47.133639  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:47.133773  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:47.169251  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.468346693s)
	W1014 19:11:47.169327  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:47.169369  369324 retry.go:31] will retry after 728.044977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:47.242644  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:47.457651  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:47.627834  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:47.628872  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:47.745718  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:47.897652  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:47.961154  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:48.132396  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:48.135284  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:48.242467  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:48.457772  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:48.628611  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:48.628698  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:48.745459  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:11:48.847679  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:48.847743  369324 retry.go:31] will retry after 711.399589ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:48.958397  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:49.129856  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:49.131361  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:49.240783  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:49.461657  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:49.559800  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:49.626127  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:49.626385  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:49.740589  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:49.961303  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:50.129420  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:50.130998  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:50.242923  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:50.461018  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:50.625577  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:50.625756  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:50.651294  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.091445185s)
	W1014 19:11:50.651361  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:50.651393  369324 retry.go:31] will retry after 800.521084ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:50.742599  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:50.959975  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:51.128198  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:51.128229  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:51.242577  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:51.452793  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:51.459566  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:51.631137  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:51.631252  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:51.744730  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:51.959767  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:52.129083  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:52.129087  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:52.247075  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:52.459607  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:52.881222  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:52.881352  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:52.885197  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:52.888329  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.435461484s)
	W1014 19:11:52.888376  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:52.888414  369324 retry.go:31] will retry after 1.069021634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:52.959822  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:53.125769  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:53.125843  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:53.242899  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:53.459029  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:53.627122  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:53.627491  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:53.741235  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:53.958551  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:53.959805  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:54.127868  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:54.134037  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:54.243324  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:54.509690  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:54.630128  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:54.630342  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:54.741673  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:54.958775  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:55.057920  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.09932404s)
	W1014 19:11:55.057986  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:55.058012  369324 retry.go:31] will retry after 4.177187403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:11:55.126069  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:55.126729  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:55.241343  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:55.463802  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:55.626326  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:55.627933  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:55.744274  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:55.958374  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:56.133795  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:56.134081  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:56.500126  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:56.500742  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:56.628160  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:56.628244  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:56.742643  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:56.958712  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:57.126789  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:57.126810  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:57.241376  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:57.458588  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:57.626577  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:57.626769  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:57.741332  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:57.961398  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:58.130502  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:58.132248  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:58.241488  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:58.458308  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:58.627263  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:58.627406  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:58.742750  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:58.958620  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:59.130511  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:59.131913  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:59.235452  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:11:59.249174  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:59.457858  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:11:59.628214  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:11:59.628432  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:11:59.742060  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:11:59.961179  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1014 19:12:00.071458  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:00.071511  369324 retry.go:31] will retry after 4.754550054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:00.126860  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:00.126882  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:00.245587  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:00.458727  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:00.626154  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:00.626363  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:00.741411  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:00.958877  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:01.125927  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:01.126729  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:01.241580  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:01.457232  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:01.625782  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:01.625977  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:01.741818  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:01.957822  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:02.125714  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:02.129327  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:02.243798  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:02.460126  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:02.626800  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:02.627129  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:02.742081  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:02.958752  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:03.126467  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:03.128024  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:03.243607  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:03.459193  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:03.625831  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:03.626113  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:03.740995  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:03.958609  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:04.126924  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:04.127814  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:04.241550  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:04.458521  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:04.627658  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:04.630185  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:04.744889  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:04.827043  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:12:04.961545  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:05.132165  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:05.132271  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:05.244010  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:05.461548  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:05.626549  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:05.631433  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:05.743841  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:05.960333  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:05.988718  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.161614962s)
	W1014 19:12:05.988770  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:05.988799  369324 retry.go:31] will retry after 6.287773659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:06.130789  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:06.130822  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:06.242557  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:06.460725  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:06.630726  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:06.630730  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:06.741591  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:06.958473  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:07.130139  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:07.130411  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:07.242366  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:07.460110  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:07.628920  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:07.629601  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:07.742007  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:07.959000  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:08.131354  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:08.133794  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:08.241480  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:08.460100  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:08.922070  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:08.922327  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:08.922356  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:08.962636  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:09.130258  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:09.132633  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:09.242728  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:09.463524  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:09.626843  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:09.627396  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:09.740788  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:09.957898  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:10.126028  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:10.126616  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:10.241638  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:10.458524  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:10.626593  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:10.627139  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:10.741799  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:10.958452  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:11.126130  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:11.126296  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:11.241098  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:11.458682  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:11.626615  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:11.627970  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:11.740837  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:11.960193  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:12.129773  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:12.130063  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:12.241486  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:12.277599  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:12:12.458307  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:12.628952  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:12.629977  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:12.744202  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:12.959197  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:13.130667  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:13.131845  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:13.244092  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:13.358263  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.080617175s)
	W1014 19:12:13.358336  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:13.358367  369324 retry.go:31] will retry after 7.129620455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:13.459726  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:13.626610  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:13.626912  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:13.741539  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:13.959559  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:14.126710  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:14.126802  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:14.241151  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:14.471171  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:14.626548  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:14.626937  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:14.741595  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:14.961785  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:15.126838  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:15.127191  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:15.246329  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:15.459096  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:15.629799  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:15.632017  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:15.742369  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:15.958021  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:16.131076  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:16.133200  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:16.240352  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:16.462807  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:16.628408  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:16.629461  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:16.742342  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:16.958220  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:17.131326  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:17.131547  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:17.244018  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:17.459599  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:17.626825  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:17.629756  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:17.741650  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:17.960014  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:18.130557  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:18.131303  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:18.240814  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:18.462190  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:18.631767  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:18.632031  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:18.745908  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:18.960043  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:19.136054  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:19.141271  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:19.242997  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:19.458804  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:19.625709  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:19.626803  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:19.744528  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:19.959067  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:20.125552  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:20.127120  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:20.240863  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:20.458998  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:20.489164  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:12:20.625451  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:20.627881  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:20.743216  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:20.960122  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:21.129902  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:21.131788  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:21.244459  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:21.460203  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:21.630790  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:21.630997  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:21.661965  369324 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.172750774s)
	W1014 19:12:21.662029  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:21.662058  369324 retry.go:31] will retry after 17.567145741s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
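
By this point the retry intervals have grown from 800ms through roughly 1.1s, 4.2s, 4.8s, 6.3s, and 7.1s to 17.6s, the signature of a jittered, growing backoff between attempts. A minimal sketch of that pattern in Go (illustrative only; this is not minikube's retry.go, and retryWithBackoff is a hypothetical helper):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff runs op up to maxAttempts times, sleeping a jittered,
    // geometrically growing interval after each failure.
    func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Double the base wait each attempt and add jitter so parallel
    		// retries do not synchronize.
    		wait := base * time.Duration(1<<attempt)
    		wait += time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retryWithBackoff(5, 500*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("apply failed")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }

Note that backoff only helps with transient failures; here the manifest itself is invalid, so each attempt is doomed regardless of spacing.
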
	I1014 19:12:21.742955  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:21.958723  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:22.130949  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:22.131720  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:22.241861  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:22.458051  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:22.628363  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:22.628868  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:22.742264  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:22.958717  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:23.124893  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:23.125959  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:23.242578  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:23.461031  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:23.627555  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:23.627580  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:23.741383  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:23.958977  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:24.126876  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:24.128620  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:24.242026  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:24.459126  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:24.626166  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:24.626357  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:24.740690  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:24.957854  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:25.125552  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:25.125775  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:25.242009  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:25.460628  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:25.630915  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:25.631687  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:25.740927  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:25.960303  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:26.129440  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:26.129543  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:26.245921  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:26.460855  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:26.627272  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:26.628329  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:26.741921  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:26.960491  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:27.128242  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:27.128262  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 19:12:27.241850  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:27.458709  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:27.629613  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:27.630263  369324 kapi.go:107] duration metric: took 45.50844243s to wait for kubernetes.io/minikube-addons=registry ...
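
The poll for the registry label finally resolves here after about 45.5s. Conceptually, that wait amounts to listing pods by label until one reaches the Running phase; a minimal client-go sketch under that assumption (illustrative only, not minikube's kapi implementation; the kubeconfig path and label selector are taken from this log, the kube-system namespace is assumed):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the same kubeconfig the log's kubectl calls use.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until a pod matching the addon label is Running, like the wait loop above.
    	for {
    		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=registry"})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					fmt.Println("registry pod running:", p.Name)
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
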
	I1014 19:12:27.742464  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:27.958687  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:28.129968  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:28.241904  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:28.458224  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:28.626461  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:28.749069  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:28.958119  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:29.125858  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:29.244816  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:29.459952  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:29.626026  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:29.742229  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:29.961730  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:30.126720  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:30.241223  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:30.458746  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:30.631777  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:30.740990  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:30.958532  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:31.125198  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:31.245333  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:31.459513  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:31.627588  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:31.745181  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:31.961902  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:32.125646  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:32.241453  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:32.458500  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:32.626644  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:32.740863  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:32.958517  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:33.126869  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:33.242497  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:33.459690  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:33.627006  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:33.745178  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:33.959442  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:34.128001  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:34.241454  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:34.460374  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:34.626290  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:34.742687  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:34.959102  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:35.130798  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:35.242803  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:35.459066  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:35.627021  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:35.742638  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:35.958105  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:36.125325  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:36.240744  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:36.459659  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:36.625906  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:36.741209  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:36.959574  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:37.126085  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:37.242674  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:37.460294  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:37.629202  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:37.741284  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:37.958672  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:38.126041  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:38.244236  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:38.458619  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:38.630082  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:38.742628  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:38.958944  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:39.125332  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:39.230421  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:12:39.241195  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:39.461361  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:39.626418  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:39.744625  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:39.960830  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1014 19:12:39.981043  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:39.981078  369324 retry.go:31] will retry after 12.020938437s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
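
The stderr above is the root cause of the inspektor-gadget retries: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in it lacks the apiVersion and kind fields every Kubernetes object must declare. A minimal sketch of that same check in Go, assuming gopkg.in/yaml.v3; checkTypeMeta and its error text are illustrative, not minikube code:

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // typeMeta mirrors the two fields kubectl's validator reported as unset.
    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    // checkTypeMeta is a hypothetical helper: it decodes one YAML document and
    // fails if apiVersion or kind is missing -- the condition behind the
    // "[apiVersion not set, kind not set]" error above.
    func checkTypeMeta(doc []byte) error {
        var tm typeMeta
        if err := yaml.Unmarshal(doc, &tm); err != nil {
            return err
        }
        if tm.APIVersion == "" || tm.Kind == "" {
            return fmt.Errorf("apiVersion/kind not set")
        }
        return nil
    }

    func main() {
        // A header-less document, like the one the validator rejected.
        fmt.Println(checkTypeMeta([]byte("metadata:\n  name: gadget\n")))
    }
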
	I1014 19:12:40.125282  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:40.240726  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:40.458167  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:40.629038  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:40.744169  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:40.960758  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:41.127128  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:41.241980  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:41.458622  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:41.626347  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:41.743419  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:41.960865  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:42.130702  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:42.243905  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:42.459510  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:42.627574  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:42.741394  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:42.959461  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:43.126341  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:43.243957  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:43.573278  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:43.628358  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:43.741872  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:43.961060  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:44.127198  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:44.240976  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:44.459543  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:44.626869  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:44.757460  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:44.959900  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:45.128605  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:45.241118  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:45.461979  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:45.631893  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:45.741938  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:45.958085  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:46.126074  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:46.243570  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:46.461216  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:46.626027  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:46.743755  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:46.958756  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:47.129099  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:47.241890  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:47.462583  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:47.633652  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:47.744183  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:48.195967  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:48.196126  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:48.241988  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:48.460285  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:48.627409  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:48.742390  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:48.961156  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:49.129143  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:49.242628  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:49.460057  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:49.625797  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:49.741486  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:49.966442  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:50.130704  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:50.244362  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:50.460918  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:50.631811  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:50.741755  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:50.959147  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:51.125809  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:51.241511  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:51.459063  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:51.628650  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:51.742077  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:51.959576  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:52.002735  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 19:12:52.126764  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:52.242547  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:52.462075  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:52.629030  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:52.741290  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1014 19:12:52.954637  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:12:52.954680  369324 retry.go:31] will retry after 30.559851927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
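
The retry cadence above (12.02s after the first failure, 30.56s after the second) points at a randomized, growing backoff between apply attempts. A generic sketch of that pattern; the attempt count, base interval, and jitter policy here are assumptions, not the actual retry.go values:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn until it succeeds or attempts run out,
    // sleeping a growing, jittered interval between tries -- the shape of the
    // "will retry after ..." lines above (which are on a seconds scale).
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            if i == attempts-1 {
                break
            }
            // Double the base each round and add jitter so retries don't align.
            wait := base<<i + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        calls := 0
        _ = retryWithBackoff(3, 10*time.Millisecond, func() error {
            calls++
            return fmt.Errorf("apply failed (attempt %d)", calls)
        })
    }
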
	I1014 19:12:52.958927  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:53.128520  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:53.240660  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:53.457826  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:53.626010  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:53.745387  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:53.960243  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:54.131455  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:54.240623  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:54.459337  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:54.632149  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:54.742829  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:54.958615  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:55.129933  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:55.241706  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:55.459697  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:55.631381  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:55.891962  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:56.011836  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:56.128503  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:56.252130  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:56.462616  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:56.625969  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:56.748755  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:56.960209  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:57.127387  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:57.242254  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:57.460226  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:57.626715  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:57.740974  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:58.071330  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:58.127678  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:58.242426  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:58.458163  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:58.627769  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:58.742451  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:58.961613  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:59.127258  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:59.241117  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:59.458733  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:12:59.626133  369324 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 19:12:59.740239  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:12:59.958750  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:00.126400  369324 kapi.go:107] duration metric: took 1m18.004570952s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 19:13:00.242254  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:00.458728  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:00.741746  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:00.958139  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:01.243747  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:01.461256  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:01.741381  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:01.960945  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:02.243993  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:02.460376  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:02.741212  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:02.965059  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:03.241376  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:03.460418  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:03.742131  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:03.961133  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 19:13:04.242223  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:04.458488  369324 kapi.go:107] duration metric: took 1m19.003898445s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 19:13:04.460434  369324 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-082251 cluster.
	I1014 19:13:04.461570  369324 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 19:13:04.462593  369324 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
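
To opt a pod out as the message above describes, the gcp-auth-skip-secret label just needs to appear in the pod's metadata. A tiny sketch of that in Go; skipGCPAuth is an illustrative name, and the "true" value is an assumption since the message only requires the key:

    package kapi

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // skipGCPAuth returns a pod skeleton carrying the gcp-auth-skip-secret
    // label that the gcp-auth addon checks before mounting credentials.
    func skipGCPAuth(name string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   name,
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
        }
    }
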
	I1014 19:13:04.742291  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:05.242190  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:05.741679  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:06.242058  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:06.743958  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:07.242436  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:07.741875  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:08.241753  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:08.742050  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:09.245627  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:09.743083  369324 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 19:13:10.241675  369324 kapi.go:107] duration metric: took 1m27.504676052s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
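
Each kapi.go:96 line above is one tick of a label-selector poll, and the kapi.go:107 lines report the total wait once every matching pod is up. A minimal sketch of that wait loop against client-go; waitForPods, the one-second tick, and the Running-phase test are illustrative stand-ins for minikube's actual kapi.go:

    package kapi

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls pods matching selector until all are Running or the
    // context expires, mirroring the "waiting for pod ... current state:
    // Pending" cadence in the log above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        start := time.Now()
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        ready = false
                    }
                }
                if ready {
                    fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(time.Second): // illustrative tick, roughly the cadence above
            }
        }
    }
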
	I1014 19:13:23.515079  369324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1014 19:13:24.211838  369324 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 19:13:24.211915  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:13:24.211926  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:13:24.212351  369324 main.go:141] libmachine: (addons-082251) DBG | Closing plugin on server side
	I1014 19:13:24.212401  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:13:24.212423  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 19:13:24.212440  369324 main.go:141] libmachine: Making call to close driver server
	I1014 19:13:24.212449  369324 main.go:141] libmachine: (addons-082251) Calling .Close
	I1014 19:13:24.212716  369324 main.go:141] libmachine: Successfully made call to close driver server
	I1014 19:13:24.212732  369324 main.go:141] libmachine: Making call to close connection to plugin binary
	W1014 19:13:24.212825  369324 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1014 19:13:24.214611  369324 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, default-storageclass, ingress-dns, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1014 19:13:24.215836  369324 addons.go:514] duration metric: took 1m51.296471511s for enable addons: enabled=[registry-creds nvidia-device-plugin default-storageclass ingress-dns amd-gpu-device-plugin storage-provisioner cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1014 19:13:24.215886  369324 start.go:246] waiting for cluster config update ...
	I1014 19:13:24.215915  369324 start.go:255] writing updated cluster config ...
	I1014 19:13:24.216217  369324 ssh_runner.go:195] Run: rm -f paused
	I1014 19:13:24.223057  369324 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 19:13:24.228401  369324 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rpkbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.235413  369324 pod_ready.go:94] pod "coredns-66bc5c9577-rpkbj" is "Ready"
	I1014 19:13:24.235437  369324 pod_ready.go:86] duration metric: took 7.009944ms for pod "coredns-66bc5c9577-rpkbj" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.329082  369324 pod_ready.go:83] waiting for pod "etcd-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.334757  369324 pod_ready.go:94] pod "etcd-addons-082251" is "Ready"
	I1014 19:13:24.334782  369324 pod_ready.go:86] duration metric: took 5.671875ms for pod "etcd-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.336623  369324 pod_ready.go:83] waiting for pod "kube-apiserver-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.341948  369324 pod_ready.go:94] pod "kube-apiserver-addons-082251" is "Ready"
	I1014 19:13:24.341970  369324 pod_ready.go:86] duration metric: took 5.324079ms for pod "kube-apiserver-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.344188  369324 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.628669  369324 pod_ready.go:94] pod "kube-controller-manager-addons-082251" is "Ready"
	I1014 19:13:24.628709  369324 pod_ready.go:86] duration metric: took 284.498024ms for pod "kube-controller-manager-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:24.827135  369324 pod_ready.go:83] waiting for pod "kube-proxy-rl7gc" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:25.227824  369324 pod_ready.go:94] pod "kube-proxy-rl7gc" is "Ready"
	I1014 19:13:25.227857  369324 pod_ready.go:86] duration metric: took 400.692796ms for pod "kube-proxy-rl7gc" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:25.427755  369324 pod_ready.go:83] waiting for pod "kube-scheduler-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:25.827593  369324 pod_ready.go:94] pod "kube-scheduler-addons-082251" is "Ready"
	I1014 19:13:25.827622  369324 pod_ready.go:86] duration metric: took 399.839672ms for pod "kube-scheduler-addons-082251" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 19:13:25.827635  369324 pod_ready.go:40] duration metric: took 1.604537185s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
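
The pod_ready.go checks in this stretch key off the pod's Ready condition rather than its phase. A small helper in the same vein; isPodReady is an illustrative name, not minikube's exact function:

    package kapi

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True -- the test
    // behind the 'pod "..." is "Ready"' lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
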
	I1014 19:13:25.872657  369324 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 19:13:25.874460  369324 out.go:179] * Done! kubectl is now configured to use "addons-082251" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.459716419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760469389459683790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9828598-a7f7-4899-908d-7d974100c29b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.460399820Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80de9162-7946-48a5-8a46-5482c57f9ed6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.460563789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80de9162-7946-48a5-8a46-5482c57f9ed6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.461387743Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00771d1df731a4e3c2cfe5ac499b33a40156618565443e9e29fabe13206489f8,PodSandboxId:06661c44331d02f07522e6c274180134c75ffc70621648fc0ef8d827019e4103,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760469244638589947,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0945d5214284b9b568eabf341bcaf43f6372e821c67bd52d2830b38e11822855,PodSandboxId:7ba9a86c0015a2481814eb7f5561ba4a1aaec65ced7c2d6f8d497f82bd9c0bd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760469210212667520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82e239e4-46e4-4a5b-913a-74be19e087ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea069b4992691f4b6ba4aed696d065845d7fced8ae40962585f29a635d80af2,PodSandboxId:4b3156b8bf196eb3a00ff9915290d79dd86ed5d17407c2959b3a476639f74546,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760469178808620415,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bxmmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a73f036-a6a2-4c06-bade-21ff639076ac,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:531aa741fe4a7eb4981a2a86a7162f9ac7b78b91aa6567df0ee44deee09cd805,PodSandboxId:9edf8f9443f2c7779800603a20db5c8486e1ee74c1f66009aeeb1dad3a5dbd52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165478867697,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77zpp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efd725df-301a-4260-92f0-d146ca773316,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437e3e64f7d50a72f1e209f7321be34c33627fc7a6cd7fb41b307397aa60d490,PodSandboxId:61d11c93a1452f93f6b835266ab066da5cc0e4c1d5b2ef5ae6ef645573530b89,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165345617835,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fn9j5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78bc62d5-61d0-4fdb-a2f1-47d2107905af,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c3ebde6479076d124aaf061b1ff7d57ae297cb4eadfe9b7359e02ef772f911,PodSandboxId:ab0392e4c5333c0696aa8c50d4b7eb69d81fdadf7b225936216a0e390c2d6ced,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760469156901124243,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-hqcbg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 30556161-b0ef-471f-964f-a6eca37b15a1,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072f9f911b9dce7da1e169e224bda2a9280145ca691dd03ce873eaaff8194934,PodSandboxId:e109143b60b1c49c0e232686aea9f9ec71d3d2708bf15ad6a85a53211626bbd9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760469143284320066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 078e3d8d-9557-476e-bdb3-72041038eef4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5890b48dc1784be7fb56dcbe06f1507853e32d8bd2fa7f582a295cb46b559ff4,PodSandboxId:eb99e4db790e1536fb4ee9b4fe33874b235051d338c5561
d0413895677a8801e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760469122581752201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wjxgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3866b9b9-cfa5-423e-aadf-3969d88023ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bab87a6fcc122ce15fa8a001121690012344b70ec21f5b4cf05f488a2b80f4,PodSandboxId:fef8503
61af1b0fdfd2a873842ffff378363f5261f2ef20acda871ebd11da1ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760469100294979259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e944fb0c52e32eee8eddc847d6a61fe9137c49176522042d9e90c5ab793a829,PodSandboxId:15241db503a39379aad
4f61ea71fb80d1bd9295426627f318b6c2327142291d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760469093948274828,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rpkbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721cdc801c079b4b6bc705898f6186ee113594b49f47f0bbb3ec3805c2708a64,PodSandboxId:9ebc2bd9d93acd968aba94da5213aef3bf47833b6ef12fd44f2a09dfcd949aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760469092648625266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl7gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c26868bff1a52273e6c9f88314b67904670d380cb420e49d9393ff47fb83f4,PodSandboxId:cd86488c7745a45bb9d3982abf8e865f569cedf768888cd0a0f4adc434f42c71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760469080949061075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f1830a9c0212cd752acd7e04b95ca1,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d16cf5c6b0868b906ec52fbea853e79b7acde576e85ddf86ec8e2041a2c6af,PodSandboxId:6e9b04d9adf4a00d3575c68d1496b1d41eddd7b9949bd4b0f96453e8a00bb448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760469080908695080,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f707512f8e47a5a596e4abce62911
25e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf835a80dedc99eefba9d637d20ec6dcd214642c450bee1433cce806d209f08,PodSandboxId:0c69d8cb76201e03267dd059ac3f672057d38c23b3be42f480b5d4eb5038bedb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760469080911094389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-082251,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a0e0d501350e2ff0d6ed28805531de36,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72c7a6929986499ab7c51b27ffb0ce0410617e85fcec40399e0ed515a68d7dc,PodSandboxId:80f248871a70b700b7b697bf8d3f40e6e389500c8a3bfe2f25e33d1ae529775a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760469080899039968,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9db629f2938736e766cbeafb458918b1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80de9162-7946-48a5-8a46-5482c57f9ed6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.482322173Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.482901489Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.500008839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c30703f9-1252-4d0b-b942-f8c80e1460aa name=/runtime.v1.RuntimeService/Version
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.500115472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c30703f9-1252-4d0b-b942-f8c80e1460aa name=/runtime.v1.RuntimeService/Version
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.501874210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d067234f-4d27-4994-aaa2-5f2bda2835e5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.503998325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760469389503968878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d067234f-4d27-4994-aaa2-5f2bda2835e5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.505647880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fe30946-31bd-47b0-a649-035936f49dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.505721181Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fe30946-31bd-47b0-a649-035936f49dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.506040651Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd946c25-24e7-49c6-83a5-15569475a6af name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.506339792Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2d15419b0cf9552756e719a141aebe0d94748b2b2cb655c586c330ed4d74e705,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d498dc89-kwrsq,Uid:544a6372-1330-4582-899c-4095635e7e74,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469388485023834,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d498dc89-kwrsq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 544a6372-1330-4582-899c-4095635e7e74,pod-template-hash: 5d498dc89,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:16:28.165895764Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06661c44331d02f07522e6c274180134c75ffc70621648fc0ef8d827019e4103,Metadata:&PodSandboxMetadata{Name:nginx,Uid:3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469239457999578,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:13:59.138084673Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ba9a86c0015a2481814eb7f5561ba4a1aaec65ced7c2d6f8d497f82bd9c0bd8,Metadata:&PodSandboxMetadata{Name:busybox,Uid:82e239e4-46e4-4a5b-913a-74be19e087ce,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469206842243960,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82e239e4-46e4-4a5b-913a-74be19e087ce,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:13:26.523014288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b3156b8bf196eb3a00ff9915290d79dd86ed5d17407c2959b3a476639f74546,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-675c5ddd98-bxmmg,Uid:8a73f036-a6a2-4c06-bade-21ff639076ac,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469166153200824,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bxmmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a73f036-a6a2-4c06-bade-21ff639076ac,pod-template-hash: 675c5ddd98,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:11:41.932893578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab0392e4c5333c0696aa8c50d4b7eb69d81fdadf7b225936216a0e390c2d6ced,Metadata:&PodSandboxMetadata{Name:gadget-hqcbg,Uid:30556161-b0ef-471f-964f-a6eca37b15a1,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469101365376897,Labels:map[string]string{controller-revision-hash: d797fcb64,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-hqcbg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 30556161-b0ef-471f-964f-a6eca37b15a1,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,kubernetes.io/config.seen: 2025-10-14T19:11:40.812561083Z,kubernetes.io/config.source: api,prometheus.io/path: /metrics,prometheus.io/port: 2223,prometheus.io/scrape: true,},RuntimeHandler:,},&PodSandbox{Id:fef850361af1b0fdfd2a873842ffff378363f5261f2ef20acda871ebd11da1ae,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469099107784632,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-10-14T19:11:38.744636584Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e109143b60b1c49c0e232686aea9f9ec71d3d2708bf15ad6a85a53211626bbd9,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:078e3d8d-9557-476e-bdb3-72041038eef4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469098871513471,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 078e3d8d-9557-476e-bdb3-72041038eef4,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"docker.io/kicbase/minikube-ingress-dns:0.0.4@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"hostPort\":53,\"protocol\":\"UDP\"}],\"volumeMounts\":[{\"mountPath\":\"/config\",\"name\":\"minikube-ingress-dns-config-volume\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\",\"volumes\":[{\"configMap\":{\"name\":\"minikube-ingress-dns\"},\"name\":\"minikube-ingress-dns-config-volume\"}]}}\n,kubernetes.io/config.seen: 2025-10-14T19:11:38.501638534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb99e4db790e1536fb4ee9b4fe33874b235051d338c5561d0413895677a8801e,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-wjxgm,Uid:3866b9b9-cfa5-423e-aadf-3969d88023ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469096393830319,Labels:map[string]string{controller-revision-hash: 7f87d6fd8d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-wjxgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3866b9b9-cfa5-423e-aadf-3969d88023ec,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:11:36.028386541Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15241db503a39379aad4f61ea71fb80d1bd9295426627f318b6c2327142291d3,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-rpkbj,Uid:c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469093054941240,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-rpkbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:11:32.674075422Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9ebc2bd9d93acd968aba94da5213aef3bf47833b6ef12fd44f2a09dfcd949aab,Metadata:&PodSandboxMetadata{Name:kube-proxy-rl7gc,Uid:52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469092530299707,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rl7gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T19:11:32.198874571Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cd86488c7745a45bb9d3982abf8e865f569cedf768888cd0a0f4adc434f42c71,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-082251,Uid:a8f1830a9c0212cd752acd7e04b95ca1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469080693150040,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f1830a9c0212cd752acd7e04b95ca1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a8f1830a9c0212cd752acd7e04b95ca1,kubernetes.io/config.seen: 2025-10-14T19:11:19.859318662Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6e9b04d9adf4a00d3575c68d1496b1d41eddd7b9949bd4b0f96453e8a00bb448,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-082251,Uid:f707512f8e47a5a596e4abce6291125e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469080689753203,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f707512f8e47a5a596e4abce6291125e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.214:8443,kubernetes.io/config.hash: f707512f8e47a5a596e4abce6291125e,kubernetes.io/config.seen: 2025-10-14T19:11:19.859316867Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:80f248871a70b700b7b697bf8d3f40e6e389500c8a3bfe2f25e33d1ae529775a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-082251,Uid:9db629f2938736e766cbeafb458918b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469080686022889,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9db629f2938736e766cbeafb458918b1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9db629f2938736e766cbeafb458918b1,kubernetes.io/config.seen: 2025-10-14T19:11:19.859317912Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0c69d8cb76201e03267dd059ac3f672057d38c23b3be42f480b5d4eb5038bedb,Metadata:&PodSandboxMetadata{Name:etcd-addons-082251,Uid:a0e0d501350e2ff0d6ed28805531de36,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1760469080684847819,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e0d501350e2ff0d6ed28805531de36,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.214:2379,kubernetes.io/config.hash: a0e0d501350e2ff0d6ed28805531de36,kubernetes.io/config.seen: 2025-10-14T19:11:19.859313024Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dd946c25-24e7-49c6-83a5-15569475a6af name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.507100003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00771d1df731a4e3c2cfe5ac499b33a40156618565443e9e29fabe13206489f8,PodSandboxId:06661c44331d02f07522e6c274180134c75ffc70621648fc0ef8d827019e4103,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760469244638589947,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0945d5214284b9b568eabf341bcaf43f6372e821c67bd52d2830b38e11822855,PodSandboxId:7ba9a86c0015a2481814eb7f5561ba4a1aaec65ced7c2d6f8d497f82bd9c0bd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760469210212667520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82e239e4-46e4-4a5b-913a-74be19e087ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea069b4992691f4b6ba4aed696d065845d7fced8ae40962585f29a635d80af2,PodSandboxId:4b3156b8bf196eb3a00ff9915290d79dd86ed5d17407c2959b3a476639f74546,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760469178808620415,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bxmmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a73f036-a6a2-4c06-bade-21ff639076ac,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:531aa741fe4a7eb4981a2a86a7162f9ac7b78b91aa6567df0ee44deee09cd805,PodSandboxId:9edf8f9443f2c7779800603a20db5c8486e1ee74c1f66009aeeb1dad3a5dbd52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165478867697,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77zpp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efd725df-301a-4260-92f0-d146ca773316,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437e3e64f7d50a72f1e209f7321be34c33627fc7a6cd7fb41b307397aa60d490,PodSandboxId:61d11c93a1452f93f6b835266ab066da5cc0e4c1d5b2ef5ae6ef645573530b89,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165345617835,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fn9j5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78bc62d5-61d0-4fdb-a2f1-47d2107905af,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c3ebde6479076d124aaf061b1ff7d57ae297cb4eadfe9b7359e02ef772f911,PodSandboxId:ab0392e4c5333c0696aa8c50d4b7eb69d81fdadf7b225936216a0e390c2d6ced,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760469156901124243,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-hqcbg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 30556161-b0ef-471f-964f-a6eca37b15a1,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072f9f911b9dce7da1e169e224bda2a9280145ca691dd03ce873eaaff8194934,PodSandboxId:e109143b60b1c49c0e232686aea9f9ec71d3d2708bf15ad6a85a53211626bbd9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760469143284320066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 078e3d8d-9557-476e-bdb3-72041038eef4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5890b48dc1784be7fb56dcbe06f1507853e32d8bd2fa7f582a295cb46b559ff4,PodSandboxId:eb99e4db790e1536fb4ee9b4fe33874b235051d338c5561d0413895677a8801e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760469122581752201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wjxgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3866b9b9-cfa5-423e-aadf-3969d88023ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bab87a6fcc122ce15fa8a001121690012344b70ec21f5b4cf05f488a2b80f4,PodSandboxId:fef850361af1b0fdfd2a873842ffff378363f5261f2ef20acda871ebd11da1ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760469100294979259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e944fb0c52e32eee8eddc847d6a61fe9137c49176522042d9e90c5ab793a829,PodSandboxId:15241db503a39379aad4f61ea71fb80d1bd9295426627f318b6c2327142291d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760469093948274828,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rpkbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721cdc801c079b4b6bc705898f6186ee113594b49f47f0bbb3ec3805c2708a64,PodSandboxId:9ebc2bd9d93acd968aba94da5213aef3bf47833b6ef12fd44f2a09dfcd949aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760469092648625266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl7gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c26868bff1a52273e6c9f88314b67904670d380cb420e49d9393ff47fb83f4,PodSandboxId:cd86488c7745a45bb9d3982abf8e865f569cedf768888cd0a0f4adc434f42c71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760469080949061075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f1830a9c0212cd752acd7e04b95ca1,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d16cf5c6b0868b906ec52fbea853e79b7acde576e85ddf86ec8e2041a2c6af,PodSandboxId:6e9b04d9adf4a00d3575c68d1496b1d41eddd7b9949bd4b0f96453e8a00bb448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760469080908695080,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f707512f8e47a5a596e4abce6291125e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf835a80dedc99eefba9d637d20ec6dcd214642c450bee1433cce806d209f08,PodSandboxId:0c69d8cb76201e03267dd059ac3f672057d38c23b3be42f480b5d4eb5038bedb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760469080911094389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e0d501350e2ff0d6ed28805531de36,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72c7a6929986499ab7c51b27ffb0ce0410617e85fcec40399e0ed515a68d7dc,PodSandboxId:80f248871a70b700b7b697bf8d3f40e6e389500c8a3bfe2f25e33d1ae529775a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760469080899039968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9db629f2938736e766cbeafb458918b1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fe30946-31bd-47b0-a649-035936f49dc9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.507235538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91ba0ea7-e6d5-41b3-802a-6407b8c13972 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.507596136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91ba0ea7-e6d5-41b3-802a-6407b8c13972 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.507845590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00771d1df731a4e3c2cfe5ac499b33a40156618565443e9e29fabe13206489f8,PodSandboxId:06661c44331d02f07522e6c274180134c75ffc70621648fc0ef8d827019e4103,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760469244638589947,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0945d5214284b9b568eabf341bcaf43f6372e821c67bd52d2830b38e11822855,PodSandboxId:7ba9a86c0015a2481814eb7f5561ba4a1aaec65ced7c2d6f8d497f82bd9c0bd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760469210212667520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82e239e4-46e4-4a5b-913a-74be19e087ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea069b4992691f4b6ba4aed696d065845d7fced8ae40962585f29a635d80af2,PodSandboxId:4b3156b8bf196eb3a00ff9915290d79dd86ed5d17407c2959b3a476639f74546,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760469178808620415,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bxmmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a73f036-a6a2-4c06-bade-21ff639076ac,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57c3ebde6479076d124aaf061b1ff7d57ae297cb4eadfe9b7359e02ef772f911,PodSandboxId:ab0392e4c5333c0696aa8c50d4b7eb69d81fdadf7b225936216a0e390c2d6ced,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760469156901124243,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-hqcbg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 30556161-b0ef-471f-964f-a6eca37b15a1,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072f9f911b9dce7da1e169e224bda2a9280145ca691dd03ce873eaaff8194934,PodSandboxId:e109143b60b1c49c0e232686aea9f9ec71d3d2708bf15ad6a85a53211626bbd9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760469143284320066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 078e3d8d-9557-476e-bdb3-72041038eef4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5890b48dc1784be7fb56dcbe06f1507853e32d8bd2fa7f582a295cb46b559ff4,PodSandboxId:eb99e4db790e1536fb4ee9b4fe33874b235051d338c5561d0413895677a8801e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760469122581752201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wjxgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3866b9b9-cfa5-423e-aadf-3969d88023ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bab87a6fcc122ce15fa8a001121690012344b70ec21f5b4cf05f488a2b80f4,PodSandboxId:fef850361af1b0fdfd2a873842ffff378363f5261f2ef20acda871ebd11da1ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760469100294979259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e944fb0c52e32eee8eddc847d6a61fe9137c49176522042d9e90c5ab793a829,PodSandboxId:15241db503a39379aad4f61ea71fb80d1bd9295426627f318b6c2327142291d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760469093948274828,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rpkbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721cdc801c079b4b6bc705898f6186ee113594b49f47f0bbb3ec3805c2708a64,PodSandboxId:9ebc2bd9d93acd968aba94da5213aef3bf47833b6ef12fd44f2a09dfcd949aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760469092648625266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl7gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c26868bff1a52273e6c9f88314b67904670d380cb420e49d9393ff47fb83f4,PodSandboxId:cd86488c7745a45bb9d3982abf8e865f569cedf768888cd0a0f4adc434f42c71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760469080949061075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f1830a9c0212cd752acd7e04b95ca1,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d16cf5c6b0868b906ec52fbea853e79b7acde576e85ddf86ec8e2041a2c6af,PodSandboxId:6e9b04d9adf4a00d3575c68d1496b1d41eddd7b9949bd4b0f96453e8a00bb448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760469080908695080,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f707512f8e47a5a596e4abce6291125e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf835a80dedc99eefba9d637d20ec6dcd214642c450bee1433cce806d209f08,PodSandboxId:0c69d8cb76201e03267dd059ac3f672057d38c23b3be42f480b5d4eb5038bedb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760469080911094389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0e0d501350e2ff0d6ed28805531de36,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72c7a6929986499ab7c51b27ffb0ce0410617e85fcec40399e0ed515a68d7dc,PodSandboxId:80f248871a70b700b7b697bf8d3f40e6e389500c8a3bfe2f25e33d1ae529775a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760469080899039968,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9db629f2938736e766cbeafb458918b1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91ba0ea7-e6d5-41b3-802a-6407b8c13972 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.543721458Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82474143-f48a-4b50-87eb-e68faa55659b name=/runtime.v1.RuntimeService/Version
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.543794239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82474143-f48a-4b50-87eb-e68faa55659b name=/runtime.v1.RuntimeService/Version
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.545679069Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8e37784-9a86-489f-b4d0-ff87df16bd11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.548020852Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760469389547991502,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8e37784-9a86-489f-b4d0-ff87df16bd11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.548711133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7512d29-4313-4a21-861f-0b61f2da4793 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.548781120Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7512d29-4313-4a21-861f-0b61f2da4793 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 19:16:29 addons-082251 crio[820]: time="2025-10-14 19:16:29.549135681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00771d1df731a4e3c2cfe5ac499b33a40156618565443e9e29fabe13206489f8,PodSandboxId:06661c44331d02f07522e6c274180134c75ffc70621648fc0ef8d827019e4103,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760469244638589947,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3734bf2f-d6f3-4fc4-a164-aa3a5ecee661,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0945d5214284b9b568eabf341bcaf43f6372e821c67bd52d2830b38e11822855,PodSandboxId:7ba9a86c0015a2481814eb7f5561ba4a1aaec65ced7c2d6f8d497f82bd9c0bd8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760469210212667520,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 82e239e4-46e4-4a5b-913a-74be19e087ce,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea069b4992691f4b6ba4aed696d065845d7fced8ae40962585f29a635d80af2,PodSandboxId:4b3156b8bf196eb3a00ff9915290d79dd86ed5d17407c2959b3a476639f74546,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760469178808620415,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-bxmmg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8a73f036-a6a2-4c06-bade-21ff639076ac,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:531aa741fe4a7eb4981a2a86a7162f9ac7b78b91aa6567df0ee44deee09cd805,PodSandboxId:9edf8f9443f2c7779800603a20db5c8486e1ee74c1f66009aeeb1dad3a5dbd52,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165478867697,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-77zpp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: efd725df-301a-4260-92f0-d146ca773316,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:437e3e64f7d50a72f1e209f7321be34c33627fc7a6cd7fb41b307397aa60d490,PodSandboxId:61d11c93a1452f93f6b835266ab066da5cc0e4c1d5b2ef5ae6ef645573530b89,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760469165345617835,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-fn9j5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 78bc62d5-61d0-4fdb-a2f1-47d2107905af,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57c3ebde6479076d124aaf061b1ff7d57ae297cb4eadfe9b7359e02ef772f911,PodSandboxId:ab0392e4c5333c0696aa8c50d4b7eb69d81fdadf7b225936216a0e390c2d6ced,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760469156901124243,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-hqcbg,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 30556161-b0ef-471f-964f-a6eca37b15a1,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072f9f911b9dce7da1e169e224bda2a9280145ca691dd03ce873eaaff8194934,PodSandboxId:e109143b60b1c49c0e232686aea9f9ec71d3d2708bf15ad6a85a53211626bbd9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760469143284320066,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 078e3d8d-9557-476e-bdb3-72041038eef4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5890b48dc1784be7fb56dcbe06f1507853e32d8bd2fa7f582a295cb46b559ff4,PodSandboxId:eb99e4db790e1536fb4ee9b4fe33874b235051d338c5561d0413895677a8801e,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760469122581752201,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wjxgm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3866b9b9-cfa5-423e-aadf-3969d88023ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bab87a6fcc122ce15fa8a001121690012344b70ec21f5b4cf05f488a2b80f4,PodSandboxId:fef850361af1b0fdfd2a873842ffff378363f5261f2ef20acda871ebd11da1ae,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760469100294979259,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1426d20a-4d3a-4473-b6b5-e213b9eb7c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e944fb0c52e32eee8eddc847d6a61fe9137c49176522042d9e90c5ab793a829,PodSandboxId:15241db503a39379aad4f61ea71fb80d1bd9295426627f318b6c2327142291d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760469093948274828,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rpkbj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c31ec4cb-a1a3-45d4-bb9b-5f6ea1abac04,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:721cdc801c079b4b6bc705898f6186ee113594b49f47f0bbb3ec3805c2708a64,PodSandboxId:9ebc2bd9d93acd968aba94da5213aef3bf47833b6ef12fd44f2a09dfcd949aab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760469092648625266,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rl7gc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a7f838-8ad0-4fa9-8bb4-a0b1d45eb94c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00c26868bff1a52273e6c9f88314b67904670d380cb420e49d9393ff47fb83f4,PodSandboxId:cd86488c7745a45bb9d3982abf8e865f569cedf768888cd0a0f4adc434f42c71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760469080949061075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f1830a9c0212cd752acd7e04b95ca1,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d16cf5c6b0868b906ec52fbea853e79b7acde576e85ddf86ec8e2041a2c6af,PodSandboxId:6e9b04d9adf4a00d3575c68d1496b1d41eddd7b9949bd4b0f96453e8a00bb448,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760469080908695080,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f707512f8e47a5a596e4abce62911
25e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccf835a80dedc99eefba9d637d20ec6dcd214642c450bee1433cce806d209f08,PodSandboxId:0c69d8cb76201e03267dd059ac3f672057d38c23b3be42f480b5d4eb5038bedb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760469080911094389,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-082251,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a0e0d501350e2ff0d6ed28805531de36,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72c7a6929986499ab7c51b27ffb0ce0410617e85fcec40399e0ed515a68d7dc,PodSandboxId:80f248871a70b700b7b697bf8d3f40e6e389500c8a3bfe2f25e33d1ae529775a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760469080899039968,Labels:map[string]string{io.k
ubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-082251,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9db629f2938736e766cbeafb458918b1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7512d29-4313-4a21-861f-0b61f2da4793 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00771d1df731a       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   06661c44331d0       nginx
	0945d5214284b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   7ba9a86c0015a       busybox
	bea069b499269       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   4b3156b8bf196       ingress-nginx-controller-675c5ddd98-bxmmg
	531aa741fe4a7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              patch                     0                   9edf8f9443f2c       ingress-nginx-admission-patch-77zpp
	437e3e64f7d50       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   61d11c93a1452       ingress-nginx-admission-create-fn9j5
	57c3ebde64790       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            3 minutes ago       Running             gadget                    0                   ab0392e4c5333       gadget-hqcbg
	072f9f911b9dc       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   e109143b60b1c       kube-ingress-dns-minikube
	5890b48dc1784       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   eb99e4db790e1       amd-gpu-device-plugin-wjxgm
	c5bab87a6fcc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   fef850361af1b       storage-provisioner
	5e944fb0c52e3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   15241db503a39       coredns-66bc5c9577-rpkbj
	721cdc801c079       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   9ebc2bd9d93ac       kube-proxy-rl7gc
	00c26868bff1a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   cd86488c7745a       kube-scheduler-addons-082251
	ccf835a80dedc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   0c69d8cb76201       etcd-addons-082251
	82d16cf5c6b08       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   6e9b04d9adf4a       kube-apiserver-addons-082251
	e72c7a6929986       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   80f248871a70b       kube-controller-manager-addons-082251
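	
	A listing like the one above can be refreshed at any time by querying the cri-o runtime from inside the VM; a minimal sketch, reusing this profile's ssh helper (crictl is assumed present on the node, as it is in minikube's ISO):
	
	  out/minikube-linux-amd64 -p addons-082251 ssh "sudo crictl ps -a"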
	
	
	==> coredns [5e944fb0c52e32eee8eddc847d6a61fe9137c49176522042d9e90c5ab793a829] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 10.244.0.27:44201 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000428011s
	[INFO] 10.244.0.27:47828 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000229857s
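	
	Both lookups resolve registry.kube-system.svc.cluster.local with NOERROR, so in-cluster DNS was healthy when the ingress test failed. A quick manual probe from a throwaway pod (the name dns-probe is arbitrary; the busybox image is the one already pulled by this run):
	
	  kubectl --context addons-082251 run dns-probe --rm -i --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local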
	
	
	==> describe nodes <==
	Name:               addons-082251
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-082251
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=addons-082251
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T19_11_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-082251
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 19:11:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-082251
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 19:16:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 19:14:31 +0000   Tue, 14 Oct 2025 19:11:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 19:14:31 +0000   Tue, 14 Oct 2025 19:11:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 19:14:31 +0000   Tue, 14 Oct 2025 19:11:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 19:14:31 +0000   Tue, 14 Oct 2025 19:11:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.214
	  Hostname:    addons-082251
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 370e40dd634a4fc284c554395dc3dfe0
	  System UUID:                370e40dd-634a-4fc2-84c5-54395dc3dfe0
	  Boot ID:                    a662db1b-92c0-4ce9-ae26-a275cca8af14
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-kwrsq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-hqcbg                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bxmmg    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m48s
	  kube-system                 amd-gpu-device-plugin-wjxgm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 coredns-66bc5c9577-rpkbj                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m57s
	  kube-system                 etcd-addons-082251                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m3s
	  kube-system                 kube-apiserver-addons-082251                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-082251        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-rl7gc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-082251                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m56s  kube-proxy       
	  Normal  Starting                 5m2s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m2s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m2s   kubelet          Node addons-082251 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s   kubelet          Node addons-082251 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s   kubelet          Node addons-082251 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m1s   kubelet          Node addons-082251 status is now: NodeReady
	  Normal  RegisteredNode           4m58s  node-controller  Node addons-082251 event: Registered Node addons-082251 in Controller
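	
	The event timeline shows a clean single-node bootstrap: kubelet start, NodeReady one second later, and registration with the controller within five seconds. The same view can be re-fetched against this profile with:
	
	  kubectl --context addons-082251 describe node addons-082251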
	
	
	==> dmesg <==
	[  +0.045341] kauditd_printk_skb: 309 callbacks suppressed
	[ +14.224140] kauditd_printk_skb: 379 callbacks suppressed
	[Oct14 19:12] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.007456] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.977876] kauditd_printk_skb: 20 callbacks suppressed
	[  +7.642163] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.999801] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.000042] kauditd_printk_skb: 196 callbacks suppressed
	[  +6.853779] kauditd_printk_skb: 76 callbacks suppressed
	[Oct14 19:13] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.985681] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.000036] kauditd_printk_skb: 32 callbacks suppressed
	[ +13.074250] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.885227] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.717987] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.142337] kauditd_printk_skb: 105 callbacks suppressed
	[Oct14 19:14] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.323611] kauditd_printk_skb: 92 callbacks suppressed
	[  +2.671651] kauditd_printk_skb: 76 callbacks suppressed
	[  +1.019968] kauditd_printk_skb: 76 callbacks suppressed
	[  +7.444569] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.435784] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.000069] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.841685] kauditd_printk_skb: 41 callbacks suppressed
	[Oct14 19:16] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [ccf835a80dedc99eefba9d637d20ec6dcd214642c450bee1433cce806d209f08] <==
	{"level":"info","ts":"2025-10-14T19:12:55.886649Z","caller":"traceutil/trace.go:172","msg":"trace[35230243] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1128; }","duration":"150.721252ms","start":"2025-10-14T19:12:55.735911Z","end":"2025-10-14T19:12:55.886632Z","steps":["trace[35230243] 'read index received'  (duration: 150.714902ms)","trace[35230243] 'applied index is now lower than readState.Index'  (duration: 5.445µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T19:12:55.886772Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.833592ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T19:12:55.886833Z","caller":"traceutil/trace.go:172","msg":"trace[79207537] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"150.919592ms","start":"2025-10-14T19:12:55.735907Z","end":"2025-10-14T19:12:55.886827Z","steps":["trace[79207537] 'agreement among raft nodes before linearized reading'  (duration: 150.804026ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:12:55.887227Z","caller":"traceutil/trace.go:172","msg":"trace[3017174] transaction","detail":"{read_only:false; response_revision:1091; number_of_response:1; }","duration":"217.083286ms","start":"2025-10-14T19:12:55.670135Z","end":"2025-10-14T19:12:55.887218Z","steps":["trace[3017174] 'process raft request'  (duration: 216.97107ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:12:56.002077Z","caller":"traceutil/trace.go:172","msg":"trace[1600097690] transaction","detail":"{read_only:false; response_revision:1092; number_of_response:1; }","duration":"310.058709ms","start":"2025-10-14T19:12:55.692004Z","end":"2025-10-14T19:12:56.002062Z","steps":["trace[1600097690] 'process raft request'  (duration: 304.777761ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T19:12:56.002213Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T19:12:55.691980Z","time spent":"310.144113ms","remote":"127.0.0.1:35856","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ugk7srrcvwefusbnruk23bepim\" mod_revision:1032 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ugk7srrcvwefusbnruk23bepim\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ugk7srrcvwefusbnruk23bepim\" > >"}
	{"level":"info","ts":"2025-10-14T19:12:58.065033Z","caller":"traceutil/trace.go:172","msg":"trace[1624519219] linearizableReadLoop","detail":"{readStateIndex:1137; appliedIndex:1137; }","duration":"111.859072ms","start":"2025-10-14T19:12:57.953136Z","end":"2025-10-14T19:12:58.064995Z","steps":["trace[1624519219] 'read index received'  (duration: 111.853977ms)","trace[1624519219] 'applied index is now lower than readState.Index'  (duration: 4.297µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T19:12:58.065169Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.012003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T19:12:58.065196Z","caller":"traceutil/trace.go:172","msg":"trace[792501184] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"112.056911ms","start":"2025-10-14T19:12:57.953127Z","end":"2025-10-14T19:12:58.065184Z","steps":["trace[792501184] 'agreement among raft nodes before linearized reading'  (duration: 111.982435ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T19:12:58.065446Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.795847ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T19:12:58.065524Z","caller":"traceutil/trace.go:172","msg":"trace[1314301959] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1099; }","duration":"100.879062ms","start":"2025-10-14T19:12:57.964635Z","end":"2025-10-14T19:12:58.065514Z","steps":["trace[1314301959] 'agreement among raft nodes before linearized reading'  (duration: 100.782716ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:12:58.065550Z","caller":"traceutil/trace.go:172","msg":"trace[1190180463] transaction","detail":"{read_only:false; response_revision:1099; number_of_response:1; }","duration":"170.208272ms","start":"2025-10-14T19:12:57.895333Z","end":"2025-10-14T19:12:58.065542Z","steps":["trace[1190180463] 'process raft request'  (duration: 169.711866ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:13:14.019239Z","caller":"traceutil/trace.go:172","msg":"trace[403220050] transaction","detail":"{read_only:false; response_revision:1193; number_of_response:1; }","duration":"133.089789ms","start":"2025-10-14T19:13:13.886126Z","end":"2025-10-14T19:13:14.019215Z","steps":["trace[403220050] 'process raft request'  (duration: 132.997307ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:13:53.103515Z","caller":"traceutil/trace.go:172","msg":"trace[930238370] linearizableReadLoop","detail":"{readStateIndex:1449; appliedIndex:1449; }","duration":"138.462967ms","start":"2025-10-14T19:13:52.964990Z","end":"2025-10-14T19:13:53.103453Z","steps":["trace[930238370] 'read index received'  (duration: 138.455542ms)","trace[930238370] 'applied index is now lower than readState.Index'  (duration: 6.352µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T19:13:53.103675Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.67185ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T19:13:53.103707Z","caller":"traceutil/trace.go:172","msg":"trace[901461664] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1396; }","duration":"138.72983ms","start":"2025-10-14T19:13:52.964968Z","end":"2025-10-14T19:13:53.103698Z","steps":["trace[901461664] 'agreement among raft nodes before linearized reading'  (duration: 138.641639ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:13:53.104406Z","caller":"traceutil/trace.go:172","msg":"trace[378824357] transaction","detail":"{read_only:false; response_revision:1397; number_of_response:1; }","duration":"153.638815ms","start":"2025-10-14T19:13:52.950757Z","end":"2025-10-14T19:13:53.104396Z","steps":["trace[378824357] 'process raft request'  (duration: 153.514171ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:13:53.104836Z","caller":"traceutil/trace.go:172","msg":"trace[880057603] transaction","detail":"{read_only:false; response_revision:1398; number_of_response:1; }","duration":"119.171529ms","start":"2025-10-14T19:13:52.985658Z","end":"2025-10-14T19:13:53.104830Z","steps":["trace[880057603] 'process raft request'  (duration: 119.120025ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:14:04.341736Z","caller":"traceutil/trace.go:172","msg":"trace[1285704593] linearizableReadLoop","detail":"{readStateIndex:1567; appliedIndex:1567; }","duration":"139.352856ms","start":"2025-10-14T19:14:04.202365Z","end":"2025-10-14T19:14:04.341718Z","steps":["trace[1285704593] 'read index received'  (duration: 139.345696ms)","trace[1285704593] 'applied index is now lower than readState.Index'  (duration: 6.167µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-14T19:14:04.341866Z","caller":"traceutil/trace.go:172","msg":"trace[1864737943] transaction","detail":"{read_only:false; response_revision:1511; number_of_response:1; }","duration":"150.801786ms","start":"2025-10-14T19:14:04.191054Z","end":"2025-10-14T19:14:04.341856Z","steps":["trace[1864737943] 'process raft request'  (duration: 150.697863ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T19:14:04.341893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.491132ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T19:14:04.341913Z","caller":"traceutil/trace.go:172","msg":"trace[1770730069] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1510; }","duration":"139.547217ms","start":"2025-10-14T19:14:04.202360Z","end":"2025-10-14T19:14:04.341907Z","steps":["trace[1770730069] 'agreement among raft nodes before linearized reading'  (duration: 139.451534ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T19:14:04.515380Z","caller":"traceutil/trace.go:172","msg":"trace[1314177310] transaction","detail":"{read_only:false; response_revision:1512; number_of_response:1; }","duration":"161.204417ms","start":"2025-10-14T19:14:04.354162Z","end":"2025-10-14T19:14:04.515366Z","steps":["trace[1314177310] 'process raft request'  (duration: 157.38975ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T19:14:21.728804Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.961767ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6697866275938163537 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/configmaps/yakd-dashboard/kube-root-ca.crt\" mod_revision:554 > success:<request_delete_range:<key:\"/registry/configmaps/yakd-dashboard/kube-root-ca.crt\" > > failure:<request_range:<key:\"/registry/configmaps/yakd-dashboard/kube-root-ca.crt\" > >>","response":"size:18"}
	{"level":"info","ts":"2025-10-14T19:14:21.731744Z","caller":"traceutil/trace.go:172","msg":"trace[94689080] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1638; }","duration":"272.601653ms","start":"2025-10-14T19:14:21.458849Z","end":"2025-10-14T19:14:21.731450Z","steps":["trace[94689080] 'process raft request'  (duration: 92.78419ms)","trace[94689080] 'compare'  (duration: 175.600982ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:16:29 up 5 min,  0 users,  load average: 0.26, 0.83, 0.47
	Linux addons-082251 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [82d16cf5c6b0868b906ec52fbea853e79b7acde576e85ddf86ec8e2041a2c6af] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1014 19:12:30.582216       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.233.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.233.78:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.233.78:443: connect: connection refused" logger="UnhandledError"
	E1014 19:12:30.586776       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.233.78:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.233.78:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.233.78:443: connect: connection refused" logger="UnhandledError"
	I1014 19:12:30.646104       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1014 19:13:36.718102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:47200: use of closed network connection
	E1014 19:13:36.913291       1 conn.go:339] Error on socket receive: read tcp 192.168.39.214:8443->192.168.39.1:47212: use of closed network connection
	I1014 19:13:46.095893       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.144.0"}
	I1014 19:13:58.950128       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 19:13:59.188163       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.145.34"}
	E1014 19:14:25.360079       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1014 19:14:28.439920       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1014 19:14:31.609437       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1014 19:14:57.774595       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 19:14:57.774885       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 19:14:57.854662       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 19:14:57.855009       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 19:14:57.920782       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 19:14:57.920873       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 19:14:57.960189       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 19:14:57.960220       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1014 19:14:58.927564       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1014 19:14:58.960778       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1014 19:14:59.073910       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1014 19:16:28.263155       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.86.22"}
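	
	The earlier connection-refused errors against https://10.98.233.78:443/apis/metrics.k8s.io/v1beta1 mean the aggregated metrics API backend was unreachable at that moment. Its availability condition is recorded on the APIService object and can be inspected directly (this returns NotFound if the metrics-server addon has since been removed):
	
	  kubectl --context addons-082251 get apiservice v1beta1.metrics.k8s.io \
	    -o jsonpath='{.status.conditions}'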
	
	
	==> kube-controller-manager [e72c7a6929986499ab7c51b27ffb0ce0410617e85fcec40399e0ed515a68d7dc] <==
	E1014 19:15:02.775285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:06.517304       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:06.518275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:07.450322       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:07.451271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:08.051668       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:08.052854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:15.959064       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:15.960084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:18.236359       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:18.237452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:20.680590       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:20.681807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:37.887660       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:37.888719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:40.418070       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:40.419958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:15:41.733530       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:15:41.734873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:16:19.060026       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:16:19.061185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:16:24.600260       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:16:24.601243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1014 19:16:27.852937       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1014 19:16:27.854628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
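	
	These repeating watch failures line up with the snapshot.storage.k8s.io group being dropped from the apiserver at 19:14:57 (see the kube-apiserver log above): the metadata informers keep retrying resources whose CRDs no longer exist. That the CRDs are in fact gone can be confirmed with:
	
	  kubectl --context addons-082251 get crd | grep snapshot.storage.k8s.io \
	    || echo "no snapshot CRDs installed"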
	
	
	==> kube-proxy [721cdc801c079b4b6bc705898f6186ee113594b49f47f0bbb3ec3805c2708a64] <==
	I1014 19:11:32.899908       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 19:11:33.005620       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 19:11:33.005712       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.214"]
	E1014 19:11:33.005823       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 19:11:33.289223       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1014 19:11:33.289343       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 19:11:33.289407       1 server_linux.go:132] "Using iptables Proxier"
	I1014 19:11:33.308178       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 19:11:33.308530       1 server.go:527] "Version info" version="v1.34.1"
	I1014 19:11:33.308543       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 19:11:33.318071       1 config.go:200] "Starting service config controller"
	I1014 19:11:33.318099       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 19:11:33.318119       1 config.go:106] "Starting endpoint slice config controller"
	I1014 19:11:33.318122       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 19:11:33.318201       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 19:11:33.318208       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 19:11:33.321936       1 config.go:309] "Starting node config controller"
	I1014 19:11:33.321949       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 19:11:33.419055       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 19:11:33.419561       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 19:11:33.419606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 19:11:33.428597       1 shared_informer.go:356] "Caches are synced" controller="node config"
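	
	The nodePortAddresses warning at startup is advisory: with the field unset, NodePort traffic is accepted on every local IP. Following the log's own suggestion, the setting lives under config.conf in the kubeadm-managed kube-proxy ConfigMap; adding nodePortAddresses: ["primary"] there and recycling the kube-proxy pod would apply it:
	
	  kubectl --context addons-082251 -n kube-system get configmap kube-proxy -o yaml
	  kubectl --context addons-082251 -n kube-system delete pod -l k8s-app=kube-proxy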
	
	
	==> kube-scheduler [00c26868bff1a52273e6c9f88314b67904670d380cb420e49d9393ff47fb83f4] <==
	E1014 19:11:23.504000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 19:11:23.504038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1014 19:11:23.504225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 19:11:23.504287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:11:23.504342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:11:23.504393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1014 19:11:23.504447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:11:23.504551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 19:11:23.506731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1014 19:11:23.506805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:11:23.506879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1014 19:11:23.507009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:11:24.336438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1014 19:11:24.355950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1014 19:11:24.362163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1014 19:11:24.456317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1014 19:11:24.480014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1014 19:11:24.549629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1014 19:11:24.552547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1014 19:11:24.643631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1014 19:11:24.647154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1014 19:11:24.691531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1014 19:11:24.802622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1014 19:11:25.068814       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1014 19:11:27.791620       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
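	
	The flood of "forbidden" errors above is the usual scheduler startup race: its informers begin listing before the apiserver has finished publishing the system:kube-scheduler RBAC bindings, and the errors stop once caches sync at 19:11:27. The effective permissions can be verified after the fact with:
	
	  kubectl --context addons-082251 auth can-i list pods --as=system:kube-scheduler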
	
	
	==> kubelet <==
	Oct 14 19:15:00 addons-082251 kubelet[1496]: I1014 19:15:00.899157    1496 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c37afd63c9c5072e89c89efe79e25c7d8754f6ab53099a36661c3c22a9b224c"} err="failed to get container status \"4c37afd63c9c5072e89c89efe79e25c7d8754f6ab53099a36661c3c22a9b224c\": rpc error: code = NotFound desc = could not find container \"4c37afd63c9c5072e89c89efe79e25c7d8754f6ab53099a36661c3c22a9b224c\": container with ID starting with 4c37afd63c9c5072e89c89efe79e25c7d8754f6ab53099a36661c3c22a9b224c not found: ID does not exist"
	Oct 14 19:15:01 addons-082251 kubelet[1496]: I1014 19:15:01.510057    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405d98dc-b741-40a2-a0f0-e9af7e01fbd5" path="/var/lib/kubelet/pods/405d98dc-b741-40a2-a0f0-e9af7e01fbd5/volumes"
	Oct 14 19:15:01 addons-082251 kubelet[1496]: I1014 19:15:01.511056    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6f7acd-a506-4ac0-a48b-22e7896ae5c2" path="/var/lib/kubelet/pods/7c6f7acd-a506-4ac0-a48b-22e7896ae5c2/volumes"
	Oct 14 19:15:01 addons-082251 kubelet[1496]: I1014 19:15:01.511876    1496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3535a7-8a38-4b3e-af2d-fe5bdca9a8f9" path="/var/lib/kubelet/pods/ee3535a7-8a38-4b3e-af2d-fe5bdca9a8f9/volumes"
	Oct 14 19:15:07 addons-082251 kubelet[1496]: E1014 19:15:07.720978    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469307720274653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:07 addons-082251 kubelet[1496]: E1014 19:15:07.721005    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469307720274653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:17 addons-082251 kubelet[1496]: E1014 19:15:17.724095    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469317723419529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:17 addons-082251 kubelet[1496]: E1014 19:15:17.724121    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469317723419529  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:27 addons-082251 kubelet[1496]: E1014 19:15:27.726401    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469327725997939  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:27 addons-082251 kubelet[1496]: E1014 19:15:27.726424    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469327725997939  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:37 addons-082251 kubelet[1496]: E1014 19:15:37.728882    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469337728413063  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:37 addons-082251 kubelet[1496]: E1014 19:15:37.728912    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469337728413063  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:47 addons-082251 kubelet[1496]: E1014 19:15:47.731751    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469347731193664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:47 addons-082251 kubelet[1496]: E1014 19:15:47.731813    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469347731193664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:51 addons-082251 kubelet[1496]: I1014 19:15:51.506114    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wjxgm" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 19:15:57 addons-082251 kubelet[1496]: E1014 19:15:57.735931    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469357735246809  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:15:57 addons-082251 kubelet[1496]: E1014 19:15:57.736032    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469357735246809  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:06 addons-082251 kubelet[1496]: I1014 19:16:06.505735    1496 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 19:16:07 addons-082251 kubelet[1496]: E1014 19:16:07.739041    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469367738435319  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:07 addons-082251 kubelet[1496]: E1014 19:16:07.739066    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469367738435319  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:17 addons-082251 kubelet[1496]: E1014 19:16:17.742033    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469377741613004  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:17 addons-082251 kubelet[1496]: E1014 19:16:17.742079    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469377741613004  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:27 addons-082251 kubelet[1496]: E1014 19:16:27.744842    1496 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760469387744337244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:27 addons-082251 kubelet[1496]: E1014 19:16:27.744871    1496 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760469387744337244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 14 19:16:28 addons-082251 kubelet[1496]: I1014 19:16:28.257609    1496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2kpv\" (UniqueName: \"kubernetes.io/projected/544a6372-1330-4582-899c-4095635e7e74-kube-api-access-k2kpv\") pod \"hello-world-app-5d498dc89-kwrsq\" (UID: \"544a6372-1330-4582-899c-4095635e7e74\") " pod="default/hello-world-app-5d498dc89-kwrsq"
	
	
	==> storage-provisioner [c5bab87a6fcc122ce15fa8a001121690012344b70ec21f5b4cf05f488a2b80f4] <==
	W1014 19:16:05.264697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:07.268123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:07.273837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:09.277036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:09.285713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:11.288927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:11.294663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:13.298379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:13.303518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:15.307263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:15.312960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:17.316540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:17.321527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:19.325639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:19.331748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:21.337518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:21.345328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:23.348685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:23.356994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:25.360597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:25.368295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:27.372945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:27.378842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:29.383450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 19:16:29.390529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-082251 -n addons-082251
helpers_test.go:269: (dbg) Run:  kubectl --context addons-082251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-kwrsq ingress-nginx-admission-create-fn9j5 ingress-nginx-admission-patch-77zpp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-082251 describe pod hello-world-app-5d498dc89-kwrsq ingress-nginx-admission-create-fn9j5 ingress-nginx-admission-patch-77zpp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-082251 describe pod hello-world-app-5d498dc89-kwrsq ingress-nginx-admission-create-fn9j5 ingress-nginx-admission-patch-77zpp: exit status 1 (79.274816ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-kwrsq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-082251/192.168.39.214
	Start Time:       Tue, 14 Oct 2025 19:16:28 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k2kpv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-k2kpv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-kwrsq to addons-082251
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-fn9j5" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-77zpp" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-082251 describe pod hello-world-app-5d498dc89-kwrsq ingress-nginx-admission-create-fn9j5 ingress-nginx-admission-patch-77zpp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable ingress-dns --alsologtostderr -v=1: (1.739197086s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable ingress --alsologtostderr -v=1: (7.745870011s)
--- FAIL: TestAddons/parallel/Ingress (161.56s)
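
For local triage, the post-mortem the harness ran above can be replayed by hand. A minimal sketch, assuming the addons-082251 profile from this run is still up; the commands are copied from the helpers_test.go invocations in this log:

	# Assumes the addons-082251 profile still exists on the host.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-082251 -n addons-082251
	kubectl --context addons-082251 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	out/minikube-linux-amd64 -p addons-082251 logs -n 25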

TestFunctional/parallel/ImageCommands/ImageListShort (2.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 image ls --format short --alsologtostderr: (2.312699165s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-416610 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-416610 image ls --format short --alsologtostderr:
I1014 19:21:44.335773  377866 out.go:360] Setting OutFile to fd 1 ...
I1014 19:21:44.336101  377866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:44.336115  377866 out.go:374] Setting ErrFile to fd 2...
I1014 19:21:44.336122  377866 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:44.336439  377866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
I1014 19:21:44.337291  377866 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:44.337469  377866 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:44.338033  377866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:44.338121  377866 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:44.353051  377866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
I1014 19:21:44.353699  377866 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:44.354389  377866 main.go:141] libmachine: Using API Version  1
I1014 19:21:44.354424  377866 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:44.354887  377866 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:44.355146  377866 main.go:141] libmachine: (functional-416610) Calling .GetState
I1014 19:21:44.357596  377866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:44.357647  377866 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:44.372787  377866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
I1014 19:21:44.373231  377866 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:44.373840  377866 main.go:141] libmachine: Using API Version  1
I1014 19:21:44.373867  377866 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:44.374325  377866 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:44.374571  377866 main.go:141] libmachine: (functional-416610) Calling .DriverName
I1014 19:21:44.374819  377866 ssh_runner.go:195] Run: systemctl --version
I1014 19:21:44.374851  377866 main.go:141] libmachine: (functional-416610) Calling .GetSSHHostname
I1014 19:21:44.378071  377866 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:44.378613  377866 main.go:141] libmachine: (functional-416610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:b4:93", ip: ""} in network mk-functional-416610: {Iface:virbr1 ExpiryTime:2025-10-14 20:19:11 +0000 UTC Type:0 Mac:52:54:00:08:b4:93 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-416610 Clientid:01:52:54:00:08:b4:93}
I1014 19:21:44.378643  377866 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined IP address 192.168.39.139 and MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:44.378826  377866 main.go:141] libmachine: (functional-416610) Calling .GetSSHPort
I1014 19:21:44.379002  377866 main.go:141] libmachine: (functional-416610) Calling .GetSSHKeyPath
I1014 19:21:44.379157  377866 main.go:141] libmachine: (functional-416610) Calling .GetSSHUsername
I1014 19:21:44.379261  377866 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/functional-416610/id_rsa Username:docker}
I1014 19:21:44.494880  377866 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:21:46.588543  377866 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.093606397s)
W1014 19:21:46.588666  377866 cache_images.go:735] Failed to list images for profile functional-416610 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1014 19:21:46.581339    9071 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2025-10-14T19:21:46Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I1014 19:21:46.588744  377866 main.go:141] libmachine: Making call to close driver server
I1014 19:21:46.588764  377866 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:46.589117  377866 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:46.589133  377866 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:46.589143  377866 main.go:141] libmachine: Making call to close driver server
I1014 19:21:46.589151  377866 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:46.589454  377866 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:46.589473  377866 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.31s)
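
The DeadlineExceeded above can be probed outside the test harness by issuing the same calls manually. A minimal sketch, assuming the functional-416610 profile is still running; the second command mirrors the ssh_runner.go invocation logged above:

	# Assumes the functional-416610 profile still exists; crictl runs inside the VM exactly as the test did.
	out/minikube-linux-amd64 -p functional-416610 image ls --format short --alsologtostderr
	out/minikube-linux-amd64 -p functional-416610 ssh "sudo crictl images --output json"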

TestPreload (159.27s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-020721 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
E1014 20:01:22.790641  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-020721 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m30.466160524s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-020721 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-020721 image pull gcr.io/k8s-minikube/busybox: (3.265615917s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-020721
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-020721: (6.854541272s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-020721 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-020721 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (55.78678715s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-020721 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-14 20:02:30.258495257 +0000 UTC m=+3134.724641033
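
The preload scenario can be replayed by hand to check whether a pulled image survives the stop/start cycle. A minimal sketch with the flags copied verbatim from the preload_test.go runs above; the profile name test-preload-repro is hypothetical:

	# Hypothetical profile name; flags taken from the test invocations in this report.
	out/minikube-linux-amd64 start -p test-preload-repro --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
	out/minikube-linux-amd64 -p test-preload-repro image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-repro
	out/minikube-linux-amd64 start -p test-preload-repro --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
	# Expected to include gcr.io/k8s-minikube/busybox after the restart; in this run it did not.
	out/minikube-linux-amd64 -p test-preload-repro image list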
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-020721 -n test-preload-020721
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-020721 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-020721 logs -n 25: (1.052377315s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-078519 ssh -n multinode-078519-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ ssh     │ multinode-078519 ssh -n multinode-078519 sudo cat /home/docker/cp-test_multinode-078519-m03_multinode-078519.txt                                                                    │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ cp      │ multinode-078519 cp multinode-078519-m03:/home/docker/cp-test.txt multinode-078519-m02:/home/docker/cp-test_multinode-078519-m03_multinode-078519-m02.txt                           │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ ssh     │ multinode-078519 ssh -n multinode-078519-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ ssh     │ multinode-078519 ssh -n multinode-078519-m02 sudo cat /home/docker/cp-test_multinode-078519-m03_multinode-078519-m02.txt                                                            │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ node    │ multinode-078519 node stop m03                                                                                                                                                      │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:49 UTC │
	│ node    │ multinode-078519 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:49 UTC │ 14 Oct 25 19:50 UTC │
	│ node    │ list -p multinode-078519                                                                                                                                                            │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:50 UTC │                     │
	│ stop    │ -p multinode-078519                                                                                                                                                                 │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:50 UTC │ 14 Oct 25 19:52 UTC │
	│ start   │ -p multinode-078519 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:52 UTC │ 14 Oct 25 19:54 UTC │
	│ node    │ list -p multinode-078519                                                                                                                                                            │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:54 UTC │                     │
	│ node    │ multinode-078519 node delete m03                                                                                                                                                    │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:54 UTC │ 14 Oct 25 19:54 UTC │
	│ stop    │ multinode-078519 stop                                                                                                                                                               │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:54 UTC │ 14 Oct 25 19:57 UTC │
	│ start   │ -p multinode-078519 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:57 UTC │ 14 Oct 25 19:59 UTC │
	│ node    │ list -p multinode-078519                                                                                                                                                            │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ start   │ -p multinode-078519-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-078519-m02 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ start   │ -p multinode-078519-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-078519-m03 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ node    │ add -p multinode-078519                                                                                                                                                             │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │                     │
	│ delete  │ -p multinode-078519-m03                                                                                                                                                             │ multinode-078519-m03 │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ delete  │ -p multinode-078519                                                                                                                                                                 │ multinode-078519     │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 19:59 UTC │
	│ start   │ -p test-preload-020721 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-020721  │ jenkins │ v1.37.0 │ 14 Oct 25 19:59 UTC │ 14 Oct 25 20:01 UTC │
	│ image   │ test-preload-020721 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-020721  │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │ 14 Oct 25 20:01 UTC │
	│ stop    │ -p test-preload-020721                                                                                                                                                              │ test-preload-020721  │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │ 14 Oct 25 20:01 UTC │
	│ start   │ -p test-preload-020721 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-020721  │ jenkins │ v1.37.0 │ 14 Oct 25 20:01 UTC │ 14 Oct 25 20:02 UTC │
	│ image   │ test-preload-020721 image list                                                                                                                                                      │ test-preload-020721  │ jenkins │ v1.37.0 │ 14 Oct 25 20:02 UTC │ 14 Oct 25 20:02 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:01:34
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:01:34.298008  399048 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:01:34.298250  399048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:34.298261  399048 out.go:374] Setting ErrFile to fd 2...
	I1014 20:01:34.298267  399048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:01:34.298520  399048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:01:34.299006  399048 out.go:368] Setting JSON to false
	I1014 20:01:34.300020  399048 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6237,"bootTime":1760465857,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:01:34.300118  399048 start.go:141] virtualization: kvm guest
	I1014 20:01:34.302298  399048 out.go:179] * [test-preload-020721] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:01:34.303536  399048 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:01:34.303579  399048 notify.go:220] Checking for updates...
	I1014 20:01:34.305864  399048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:01:34.307029  399048 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:01:34.308100  399048 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:01:34.312504  399048 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:01:34.313695  399048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:01:34.315239  399048 config.go:182] Loaded profile config "test-preload-020721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1014 20:01:34.315684  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:01:34.315756  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:01:34.329013  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I1014 20:01:34.329539  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:01:34.330034  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:01:34.330057  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:01:34.330470  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:01:34.330671  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:01:34.332247  399048 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1014 20:01:34.333389  399048 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:01:34.333706  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:01:34.333750  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:01:34.347172  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33875
	I1014 20:01:34.347751  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:01:34.348343  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:01:34.348372  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:01:34.348752  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:01:34.348953  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:01:34.383043  399048 out.go:179] * Using the kvm2 driver based on existing profile
	I1014 20:01:34.384095  399048 start.go:305] selected driver: kvm2
	I1014 20:01:34.384108  399048 start.go:925] validating driver "kvm2" against &{Name:test-preload-020721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-020721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:01:34.384226  399048 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:01:34.385004  399048 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:01:34.385116  399048 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:01:34.399473  399048 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:01:34.399503  399048 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:01:34.414305  399048 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:01:34.414734  399048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:01:34.414763  399048 cni.go:84] Creating CNI manager for ""
	I1014 20:01:34.414821  399048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:01:34.414890  399048 start.go:349] cluster config:
	{Name:test-preload-020721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-020721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:01:34.415032  399048 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:01:34.416739  399048 out.go:179] * Starting "test-preload-020721" primary control-plane node in "test-preload-020721" cluster
	I1014 20:01:34.417913  399048 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1014 20:01:34.802760  399048 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1014 20:01:34.802800  399048 cache.go:58] Caching tarball of preloaded images
	I1014 20:01:34.802993  399048 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1014 20:01:34.804906  399048 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1014 20:01:34.806163  399048 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1014 20:01:35.276809  399048 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1014 20:01:35.276859  399048 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1014 20:01:44.662904  399048 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1014 20:01:44.663058  399048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/config.json ...
	I1014 20:01:44.663294  399048 start.go:360] acquireMachinesLock for test-preload-020721: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 20:01:44.663382  399048 start.go:364] duration metric: took 47.03µs to acquireMachinesLock for "test-preload-020721"
	I1014 20:01:44.663409  399048 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:01:44.663414  399048 fix.go:54] fixHost starting: 
	I1014 20:01:44.663701  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:01:44.663744  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:01:44.677465  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41013
	I1014 20:01:44.677936  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:01:44.678453  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:01:44.678480  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:01:44.678852  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:01:44.679094  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:01:44.679252  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetState
	I1014 20:01:44.681162  399048 fix.go:112] recreateIfNeeded on test-preload-020721: state=Stopped err=<nil>
	I1014 20:01:44.681200  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	W1014 20:01:44.681370  399048 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:01:44.683443  399048 out.go:252] * Restarting existing kvm2 VM for "test-preload-020721" ...
	I1014 20:01:44.683478  399048 main.go:141] libmachine: (test-preload-020721) Calling .Start
	I1014 20:01:44.683699  399048 main.go:141] libmachine: (test-preload-020721) starting domain...
	I1014 20:01:44.683725  399048 main.go:141] libmachine: (test-preload-020721) ensuring networks are active...
	I1014 20:01:44.684631  399048 main.go:141] libmachine: (test-preload-020721) Ensuring network default is active
	I1014 20:01:44.685053  399048 main.go:141] libmachine: (test-preload-020721) Ensuring network mk-test-preload-020721 is active
	I1014 20:01:44.685520  399048 main.go:141] libmachine: (test-preload-020721) getting domain XML...
	I1014 20:01:44.686634  399048 main.go:141] libmachine: (test-preload-020721) DBG | starting domain XML:
	I1014 20:01:44.686652  399048 main.go:141] libmachine: (test-preload-020721) DBG | <domain type='kvm'>
	I1014 20:01:44.686663  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <name>test-preload-020721</name>
	I1014 20:01:44.686672  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <uuid>53224ae3-587a-4322-a467-b5facc589ec4</uuid>
	I1014 20:01:44.686682  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:01:44.686694  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:01:44.686703  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:01:44.686715  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <os>
	I1014 20:01:44.686733  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:01:44.686747  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <boot dev='cdrom'/>
	I1014 20:01:44.686757  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <boot dev='hd'/>
	I1014 20:01:44.686861  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <bootmenu enable='no'/>
	I1014 20:01:44.686893  399048 main.go:141] libmachine: (test-preload-020721) DBG |   </os>
	I1014 20:01:44.686901  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <features>
	I1014 20:01:44.686918  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <acpi/>
	I1014 20:01:44.686933  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <apic/>
	I1014 20:01:44.686956  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <pae/>
	I1014 20:01:44.686965  399048 main.go:141] libmachine: (test-preload-020721) DBG |   </features>
	I1014 20:01:44.686983  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:01:44.686996  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <clock offset='utc'/>
	I1014 20:01:44.687006  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:01:44.687015  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:01:44.687020  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <on_crash>destroy</on_crash>
	I1014 20:01:44.687033  399048 main.go:141] libmachine: (test-preload-020721) DBG |   <devices>
	I1014 20:01:44.687065  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:01:44.687090  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <disk type='file' device='cdrom'>
	I1014 20:01:44.687106  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:01:44.687123  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/boot2docker.iso'/>
	I1014 20:01:44.687136  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:01:44.687146  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <readonly/>
	I1014 20:01:44.687157  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:01:44.687172  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </disk>
	I1014 20:01:44.687184  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <disk type='file' device='disk'>
	I1014 20:01:44.687196  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:01:44.687211  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/test-preload-020721.rawdisk'/>
	I1014 20:01:44.687222  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:01:44.687234  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:01:44.687266  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </disk>
	I1014 20:01:44.687281  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:01:44.687296  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:01:44.687305  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </controller>
	I1014 20:01:44.687338  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:01:44.687352  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:01:44.687365  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:01:44.687376  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </controller>
	I1014 20:01:44.687385  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <interface type='network'>
	I1014 20:01:44.687396  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <mac address='52:54:00:8d:6c:97'/>
	I1014 20:01:44.687413  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <source network='mk-test-preload-020721'/>
	I1014 20:01:44.687426  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <model type='virtio'/>
	I1014 20:01:44.687438  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:01:44.687447  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </interface>
	I1014 20:01:44.687459  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <interface type='network'>
	I1014 20:01:44.687467  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <mac address='52:54:00:48:9a:67'/>
	I1014 20:01:44.687475  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <source network='default'/>
	I1014 20:01:44.687482  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <model type='virtio'/>
	I1014 20:01:44.687512  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:01:44.687534  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </interface>
	I1014 20:01:44.687546  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <serial type='pty'>
	I1014 20:01:44.687556  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <target type='isa-serial' port='0'>
	I1014 20:01:44.687566  399048 main.go:141] libmachine: (test-preload-020721) DBG |         <model name='isa-serial'/>
	I1014 20:01:44.687574  399048 main.go:141] libmachine: (test-preload-020721) DBG |       </target>
	I1014 20:01:44.687582  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </serial>
	I1014 20:01:44.687592  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <console type='pty'>
	I1014 20:01:44.687603  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <target type='serial' port='0'/>
	I1014 20:01:44.687612  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </console>
	I1014 20:01:44.687621  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:01:44.687635  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:01:44.687647  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <audio id='1' type='none'/>
	I1014 20:01:44.687677  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <memballoon model='virtio'>
	I1014 20:01:44.687690  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:01:44.687700  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </memballoon>
	I1014 20:01:44.687708  399048 main.go:141] libmachine: (test-preload-020721) DBG |     <rng model='virtio'>
	I1014 20:01:44.687722  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:01:44.687739  399048 main.go:141] libmachine: (test-preload-020721) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:01:44.687749  399048 main.go:141] libmachine: (test-preload-020721) DBG |     </rng>
	I1014 20:01:44.687760  399048 main.go:141] libmachine: (test-preload-020721) DBG |   </devices>
	I1014 20:01:44.687770  399048 main.go:141] libmachine: (test-preload-020721) DBG | </domain>
	I1014 20:01:44.687783  399048 main.go:141] libmachine: (test-preload-020721) DBG | 
	I1014 20:01:45.952184  399048 main.go:141] libmachine: (test-preload-020721) waiting for domain to start...
	I1014 20:01:45.953564  399048 main.go:141] libmachine: (test-preload-020721) domain is now running
	I1014 20:01:45.953589  399048 main.go:141] libmachine: (test-preload-020721) waiting for IP...
	I1014 20:01:45.954624  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:45.955258  399048 main.go:141] libmachine: (test-preload-020721) found domain IP: 192.168.39.188
	I1014 20:01:45.955277  399048 main.go:141] libmachine: (test-preload-020721) reserving static IP address...
	I1014 20:01:45.955286  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has current primary IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:45.955824  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "test-preload-020721", mac: "52:54:00:8d:6c:97", ip: "192.168.39.188"} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:00:08 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:45.955851  399048 main.go:141] libmachine: (test-preload-020721) reserved static IP address 192.168.39.188 for domain test-preload-020721
	I1014 20:01:45.955869  399048 main.go:141] libmachine: (test-preload-020721) DBG | skip adding static IP to network mk-test-preload-020721 - found existing host DHCP lease matching {name: "test-preload-020721", mac: "52:54:00:8d:6c:97", ip: "192.168.39.188"}
	I1014 20:01:45.955884  399048 main.go:141] libmachine: (test-preload-020721) waiting for SSH...
	I1014 20:01:45.955943  399048 main.go:141] libmachine: (test-preload-020721) DBG | Getting to WaitForSSH function...
	I1014 20:01:45.958270  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:45.958615  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:00:08 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:45.958642  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:45.958772  399048 main.go:141] libmachine: (test-preload-020721) DBG | Using SSH client type: external
	I1014 20:01:45.958910  399048 main.go:141] libmachine: (test-preload-020721) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa (-rw-------)
	I1014 20:01:45.958958  399048 main.go:141] libmachine: (test-preload-020721) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:01:45.958974  399048 main.go:141] libmachine: (test-preload-020721) DBG | About to run SSH command:
	I1014 20:01:45.958982  399048 main.go:141] libmachine: (test-preload-020721) DBG | exit 0
	I1014 20:01:56.208120  399048 main.go:141] libmachine: (test-preload-020721) DBG | SSH cmd err, output: exit status 255: 
	I1014 20:01:56.208153  399048 main.go:141] libmachine: (test-preload-020721) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 20:01:56.208163  399048 main.go:141] libmachine: (test-preload-020721) DBG | command : exit 0
	I1014 20:01:56.208172  399048 main.go:141] libmachine: (test-preload-020721) DBG | err     : exit status 255
	I1014 20:01:56.208183  399048 main.go:141] libmachine: (test-preload-020721) DBG | output  : 
	I1014 20:01:59.210306  399048 main.go:141] libmachine: (test-preload-020721) DBG | Getting to WaitForSSH function...
	I1014 20:01:59.213394  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.213835  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.213859  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.214082  399048 main.go:141] libmachine: (test-preload-020721) DBG | Using SSH client type: external
	I1014 20:01:59.214106  399048 main.go:141] libmachine: (test-preload-020721) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa (-rw-------)
	I1014 20:01:59.214124  399048 main.go:141] libmachine: (test-preload-020721) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.188 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:01:59.214134  399048 main.go:141] libmachine: (test-preload-020721) DBG | About to run SSH command:
	I1014 20:01:59.214149  399048 main.go:141] libmachine: (test-preload-020721) DBG | exit 0
	I1014 20:01:59.347278  399048 main.go:141] libmachine: (test-preload-020721) DBG | SSH cmd err, output: <nil>: 
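
The loop above is the SSH-readiness probe: minikube shells out to ssh and runs "exit 0" until the guest's sshd accepts the key (the first attempt fails with status 255; the retry about three seconds later succeeds). A minimal Go sketch of the same pattern follows; the host, key path, and timings are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs `exit 0` over ssh until the guest accepts
// the connection or the deadline passes, mirroring the retry loop above.
func waitForSSH(addr, keyPath string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath, "docker@"+addr, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // sshd is up and our key is accepted
		}
		time.Sleep(3 * time.Second) // the log retries ~3s after status 255
	}
	return fmt.Errorf("ssh to %s not ready within %s", addr, deadline)
}

func main() {
	if err := waitForSSH("192.168.39.188", "id_rsa", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
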
	I1014 20:01:59.347807  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetConfigRaw
	I1014 20:01:59.348512  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetIP
	I1014 20:01:59.351417  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.351819  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.351844  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.352132  399048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/config.json ...
	I1014 20:01:59.352370  399048 machine.go:93] provisionDockerMachine start ...
	I1014 20:01:59.352390  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:01:59.352608  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:01:59.355477  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.355878  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.355904  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.356048  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:01:59.356234  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.356492  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.356679  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:01:59.356899  399048 main.go:141] libmachine: Using SSH client type: native
	I1014 20:01:59.357179  399048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1014 20:01:59.357192  399048 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:01:59.467325  399048 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 20:01:59.467361  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetMachineName
	I1014 20:01:59.467659  399048 buildroot.go:166] provisioning hostname "test-preload-020721"
	I1014 20:01:59.467701  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetMachineName
	I1014 20:01:59.467913  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:01:59.470914  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.471239  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.471271  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.471454  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:01:59.471656  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.471843  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.472023  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:01:59.472206  399048 main.go:141] libmachine: Using SSH client type: native
	I1014 20:01:59.472440  399048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1014 20:01:59.472453  399048 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-020721 && echo "test-preload-020721" | sudo tee /etc/hostname
	I1014 20:01:59.600439  399048 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-020721
	
	I1014 20:01:59.600469  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:01:59.603810  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.604174  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.604206  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.604437  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:01:59.604691  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.604847  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.605097  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:01:59.605261  399048 main.go:141] libmachine: Using SSH client type: native
	I1014 20:01:59.605511  399048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1014 20:01:59.605530  399048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-020721' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-020721/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-020721' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:01:59.726019  399048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
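
The shell snippet above keeps /etc/hosts consistent with the new hostname: if no line already names the host, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. A rough Go equivalent of that check-then-rewrite logic, with whitespace matching simplified and names illustrative:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry approximates the shell logic: leave the file alone if
// the hostname is present, rewrite an existing 127.0.1.1 line, else append.
func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	entry := "127.0.1.1 " + name
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) {
			return hosts // hostname already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = entry // rewrite the existing 127.0.1.1 line
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n" + entry + "\n"
}

func main() {
	fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "test-preload-020721"))
}
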
	I1014 20:01:59.726054  399048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:01:59.726082  399048 buildroot.go:174] setting up certificates
	I1014 20:01:59.726096  399048 provision.go:84] configureAuth start
	I1014 20:01:59.726112  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetMachineName
	I1014 20:01:59.726543  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetIP
	I1014 20:01:59.729734  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.730183  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.730214  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.730452  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:01:59.732824  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.733280  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.733307  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.733475  399048 provision.go:143] copyHostCerts
	I1014 20:01:59.733530  399048 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:01:59.733550  399048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:01:59.733617  399048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:01:59.733724  399048 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:01:59.733733  399048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:01:59.733759  399048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:01:59.733883  399048 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:01:59.733894  399048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:01:59.733920  399048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:01:59.733982  399048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.test-preload-020721 san=[127.0.0.1 192.168.39.188 localhost minikube test-preload-020721]
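
The provision.go line above mints a server certificate signed by the shared CA, with the SAN set [127.0.0.1 192.168.39.188 localhost minikube test-preload-020721]. A sketch of the same idea using Go's crypto/x509; the freshly generated CA here stands in for ca.pem/ca-key.pem purely to keep the example self-contained, and all names are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA; in the real flow this would be loaded from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-020721"}},
		DNSNames:     []string{"localhost", "minikube", "test-preload-020721"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.188")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Println(len(der), err) // DER bytes would be PEM-encoded to server.pem
}
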
	I1014 20:01:59.889067  399048 provision.go:177] copyRemoteCerts
	I1014 20:01:59.889141  399048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:01:59.889168  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:01:59.892278  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.892650  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:01:59.892685  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:01:59.892930  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:01:59.893160  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:01:59.893366  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:01:59.893523  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:01:59.980067  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:02:00.008575  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 20:02:00.036734  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:02:00.064978  399048 provision.go:87] duration metric: took 338.864968ms to configureAuth
	I1014 20:02:00.065008  399048 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:02:00.065225  399048 config.go:182] Loaded profile config "test-preload-020721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1014 20:02:00.065341  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:00.068294  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.068685  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.068721  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.068920  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:00.069149  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.069365  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.069615  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:00.069799  399048 main.go:141] libmachine: Using SSH client type: native
	I1014 20:02:00.069989  399048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1014 20:02:00.070014  399048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:02:00.319054  399048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:02:00.319088  399048 machine.go:96] duration metric: took 966.703012ms to provisionDockerMachine
	I1014 20:02:00.319104  399048 start.go:293] postStartSetup for "test-preload-020721" (driver="kvm2")
	I1014 20:02:00.319115  399048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:02:00.319137  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:00.319495  399048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:02:00.319584  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:00.323088  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.323515  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.323547  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.323716  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:00.323965  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.324144  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:00.324307  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:02:00.411030  399048 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:02:00.415669  399048 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:02:00.415699  399048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:02:00.415770  399048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:02:00.415862  399048 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:02:00.415997  399048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:02:00.427597  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:02:00.457072  399048 start.go:296] duration metric: took 137.94869ms for postStartSetup
	I1014 20:02:00.457129  399048 fix.go:56] duration metric: took 15.793712798s for fixHost
	I1014 20:02:00.457158  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:00.460119  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.460463  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.460486  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.460766  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:00.461145  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.461371  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.461545  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:00.461758  399048 main.go:141] libmachine: Using SSH client type: native
	I1014 20:02:00.461975  399048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.188 22 <nil> <nil>}
	I1014 20:02:00.461986  399048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:02:00.573559  399048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760472120.528659778
	
	I1014 20:02:00.573600  399048 fix.go:216] guest clock: 1760472120.528659778
	I1014 20:02:00.573610  399048 fix.go:229] Guest: 2025-10-14 20:02:00.528659778 +0000 UTC Remote: 2025-10-14 20:02:00.457134888 +0000 UTC m=+26.198675024 (delta=71.52489ms)
	I1014 20:02:00.573644  399048 fix.go:200] guest clock delta is within tolerance: 71.52489ms
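
The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the ~71ms delta. A small Go sketch of parsing that timestamp and applying a drift tolerance; the 2s threshold is an assumption, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses `date +%s.%N` output into a time.Time.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := guestClock("1760472120.528659778")
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold for illustration
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %s exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s within tolerance\n", delta)
	}
}
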
	I1014 20:02:00.573650  399048 start.go:83] releasing machines lock for "test-preload-020721", held for 15.910256819s
	I1014 20:02:00.573674  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:00.573945  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetIP
	I1014 20:02:00.577015  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.577419  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.577440  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.577615  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:00.578170  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:00.578401  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:00.578543  399048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:02:00.578603  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:00.578606  399048 ssh_runner.go:195] Run: cat /version.json
	I1014 20:02:00.578630  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:00.582161  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.582348  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.582575  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.582605  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.582748  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:00.582900  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:00.582923  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:00.582959  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.583084  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:00.583162  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:00.583272  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:00.583347  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:02:00.583427  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:00.583553  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:02:00.664859  399048 ssh_runner.go:195] Run: systemctl --version
	I1014 20:02:00.699553  399048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:02:00.843885  399048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:02:00.850729  399048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:02:00.850817  399048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:02:00.870896  399048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:02:00.870921  399048 start.go:495] detecting cgroup driver to use...
	I1014 20:02:00.871080  399048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:02:00.891175  399048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:02:00.909376  399048 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:02:00.909451  399048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:02:00.927527  399048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:02:00.944805  399048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:02:01.091423  399048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:02:01.309669  399048 docker.go:234] disabling docker service ...
	I1014 20:02:01.309745  399048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:02:01.326291  399048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:02:01.342196  399048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:02:01.498906  399048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:02:01.640076  399048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:02:01.655846  399048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:02:01.679056  399048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 20:02:01.679135  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.691438  399048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:02:01.691509  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.704338  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.717678  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.730175  399048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:02:01.743946  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.756098  399048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.776530  399048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:02:01.789422  399048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:02:01.799705  399048 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:02:01.799780  399048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:02:01.819960  399048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
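
The sequence above probes the net.bridge.bridge-nf-call-iptables sysctl, falls back to loading br_netfilter when the /proc entry is missing, then enables IPv4 forwarding. A compact Go sketch of that check-then-load pattern; the helper name is hypothetical, the /proc paths are the real ones from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		// sysctl key absent: the br_netfilter module is not loaded yet
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
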
	I1014 20:02:01.831702  399048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:02:01.967491  399048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:02:02.076440  399048 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:02:02.076553  399048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:02:02.081773  399048 start.go:563] Will wait 60s for crictl version
	I1014 20:02:02.081837  399048 ssh_runner.go:195] Run: which crictl
	I1014 20:02:02.085977  399048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:02:02.129118  399048 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:02:02.129200  399048 ssh_runner.go:195] Run: crio --version
	I1014 20:02:02.158966  399048 ssh_runner.go:195] Run: crio --version
	I1014 20:02:02.192674  399048 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1014 20:02:02.193973  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetIP
	I1014 20:02:02.197160  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:02.197582  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:02.197618  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:02.197904  399048 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 20:02:02.202789  399048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:02:02.218995  399048 kubeadm.go:883] updating cluster {Name:test-preload-020721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-020721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:02:02.219123  399048 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1014 20:02:02.219168  399048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:02:02.265978  399048 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1014 20:02:02.266077  399048 ssh_runner.go:195] Run: which lz4
	I1014 20:02:02.272276  399048 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 20:02:02.279712  399048 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 20:02:02.279754  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1014 20:02:03.708847  399048 crio.go:462] duration metric: took 1.436603835s to copy over tarball
	I1014 20:02:03.708937  399048 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 20:02:05.372967  399048 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.663990963s)
	I1014 20:02:05.373005  399048 crio.go:469] duration metric: took 1.664116204s to extract the tarball
	I1014 20:02:05.373014  399048 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 20:02:05.413707  399048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:02:05.457995  399048 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:02:05.458030  399048 cache_images.go:85] Images are preloaded, skipping loading
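
The preload path above is a cache shortcut: the runtime's image list decides whether the preloaded tarball must be copied into the guest and unpacked under /var before being deleted. A Go sketch of the same decision and extraction; the command strings mirror the log, while the helper names are hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadNeeded reports whether the pinned image is missing from cri-o.
func preloadNeeded(pinned string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return !strings.Contains(string(out), pinned), nil
}

// extractPreload unpacks the tarball with the same flags as the log:
// preserve security xattrs, decompress with lz4, extract into /var.
func extractPreload(tarball string) error {
	return exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
	need, err := preloadNeeded("registry.k8s.io/kube-apiserver:v1.32.0")
	if err == nil && need {
		err = extractPreload("/preloaded.tar.lz4")
	}
	if err != nil {
		fmt.Println(err)
	}
}
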
	I1014 20:02:05.458039  399048 kubeadm.go:934] updating node { 192.168.39.188 8443 v1.32.0 crio true true} ...
	I1014 20:02:05.458154  399048 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-020721 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.188
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-020721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 20:02:05.458224  399048 ssh_runner.go:195] Run: crio config
	I1014 20:02:05.505266  399048 cni.go:84] Creating CNI manager for ""
	I1014 20:02:05.505332  399048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:02:05.505359  399048 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:02:05.505390  399048 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.188 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-020721 NodeName:test-preload-020721 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.188"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.188 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:02:05.505566  399048 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.188
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-020721"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.188"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.188"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:02:05.505653  399048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1014 20:02:05.518009  399048 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:02:05.518082  399048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:02:05.530056  399048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1014 20:02:05.551146  399048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:02:05.572171  399048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1014 20:02:05.593262  399048 ssh_runner.go:195] Run: grep 192.168.39.188	control-plane.minikube.internal$ /etc/hosts
	I1014 20:02:05.597959  399048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.188	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:02:05.612786  399048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:02:05.753746  399048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:02:05.788034  399048 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721 for IP: 192.168.39.188
	I1014 20:02:05.788067  399048 certs.go:195] generating shared ca certs ...
	I1014 20:02:05.788091  399048 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:02:05.788269  399048 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:02:05.788338  399048 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:02:05.788350  399048 certs.go:257] generating profile certs ...
	I1014 20:02:05.788450  399048 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.key
	I1014 20:02:05.788519  399048 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/apiserver.key.99498017
	I1014 20:02:05.788567  399048 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/proxy-client.key
	I1014 20:02:05.788726  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:02:05.788756  399048 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:02:05.788763  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:02:05.788786  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:02:05.788807  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:02:05.788830  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:02:05.788866  399048 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:02:05.789439  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:02:05.829125  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:02:05.878646  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:02:05.908881  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:02:05.938535  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 20:02:05.968805  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:02:05.999863  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:02:06.030015  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 20:02:06.060274  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:02:06.089769  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:02:06.120387  399048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:02:06.150911  399048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:02:06.172024  399048 ssh_runner.go:195] Run: openssl version
	I1014 20:02:06.178732  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:02:06.192099  399048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:02:06.197218  399048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:02:06.197353  399048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:02:06.204668  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:02:06.217660  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:02:06.231169  399048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:02:06.236558  399048 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:02:06.236642  399048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:02:06.244101  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
	I1014 20:02:06.257407  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:02:06.271025  399048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:02:06.276497  399048 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:02:06.276579  399048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:02:06.284210  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
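
Each "ln -fs" above implements OpenSSL's subject-hash lookup convention: a CA file is trusted when a symlink named <hash>.0 points at it, where <hash> is the output of "openssl x509 -hash". A Go sketch of deriving the hash and creating the link, with illustrative paths:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the <subject-hash>.0 symlink OpenSSL uses for CA lookup.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs` force semantics
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
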
	I1014 20:02:06.297494  399048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:02:06.303087  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:02:06.310964  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:02:06.318305  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:02:06.326169  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:02:06.334397  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:02:06.342121  399048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
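
The "openssl x509 -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours. The same check expressed in Go, against one illustrative path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go analogue of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
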
	I1014 20:02:06.349751  399048 kubeadm.go:400] StartCluster: {Name:test-preload-020721 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-020721 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:02:06.349857  399048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:02:06.349950  399048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:06.387964  399048 cri.go:89] found id: ""
	I1014 20:02:06.388052  399048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:02:06.400901  399048 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1014 20:02:06.400926  399048 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1014 20:02:06.400985  399048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 20:02:06.413080  399048 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 20:02:06.413684  399048 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-020721" does not appear in /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:02:06.413856  399048 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-364627/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-020721" cluster setting kubeconfig missing "test-preload-020721" context setting]
	I1014 20:02:06.414247  399048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:02:06.415095  399048 kapi.go:59] client config for test-preload-020721: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.key", CAFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:02:06.415676  399048 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 20:02:06.415697  399048 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1014 20:02:06.415703  399048 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1014 20:02:06.415709  399048 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 20:02:06.415715  399048 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1014 20:02:06.416135  399048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 20:02:06.428370  399048 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.188
	I1014 20:02:06.428415  399048 kubeadm.go:1160] stopping kube-system containers ...
	I1014 20:02:06.428430  399048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 20:02:06.428488  399048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:02:06.466132  399048 cri.go:89] found id: ""
	I1014 20:02:06.466225  399048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 20:02:06.491010  399048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:02:06.503329  399048 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:02:06.503351  399048 kubeadm.go:157] found existing configuration files:
	
	I1014 20:02:06.503401  399048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:02:06.514812  399048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:02:06.514874  399048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:02:06.526865  399048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:02:06.538386  399048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:02:06.538450  399048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:02:06.550119  399048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:02:06.561460  399048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:02:06.561530  399048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:02:06.574357  399048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:02:06.585657  399048 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:02:06.585733  399048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:02:06.597387  399048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:02:06.609736  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:06.668756  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:07.823728  399048 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154929994s)
	I1014 20:02:07.823813  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:08.079681  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:08.148339  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:08.228936  399048 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:02:08.229033  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:08.729277  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:09.229824  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:09.729750  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:10.229918  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:10.729219  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:11.229215  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:11.256592  399048 api_server.go:72] duration metric: took 3.027673173s to wait for apiserver process to appear ...
	I1014 20:02:11.256621  399048 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:02:11.256642  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:13.710729  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 20:02:13.710768  399048 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 20:02:13.710787  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:13.762203  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 20:02:13.762234  399048 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 20:02:13.762249  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:13.780738  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 20:02:13.780786  399048 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 20:02:14.257453  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:14.263160  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 20:02:14.263186  399048 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 20:02:14.756703  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:14.763092  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 20:02:14.763132  399048 api_server.go:103] status: https://192.168.39.188:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 20:02:15.256798  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:15.261374  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1014 20:02:15.267828  399048 api_server.go:141] control plane version: v1.32.0
	I1014 20:02:15.267858  399048 api_server.go:131] duration metric: took 4.011230925s to wait for apiserver health ...
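
The healthz exchange above follows the usual restart sequence: anonymous probes are first rejected with 403 by RBAC, then answered 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, and finally 200/ok. Below is a minimal poller sketch under those assumptions; the ~500ms cadence mirrors the log, and skipping TLS verification is a shortcut for the sketch only, not how minikube authenticates to the apiserver.

    // healthzpoll.go - hedged sketch of polling an apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut; real callers should trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the 200/ok case seen at 20:02:15 above
                }
                // 403 (RBAC rejects anonymous) and 500 (hooks pending) land here.
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.39.188:8443/healthz", time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
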
	I1014 20:02:15.267868  399048 cni.go:84] Creating CNI manager for ""
	I1014 20:02:15.267876  399048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:02:15.269746  399048 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 20:02:15.270984  399048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 20:02:15.284155  399048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 20:02:15.322881  399048 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:02:15.330033  399048 system_pods.go:59] 7 kube-system pods found
	I1014 20:02:15.330080  399048 system_pods.go:61] "coredns-668d6bf9bc-82859" [debb753d-345c-49da-bf8e-0a0d1fba55ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:02:15.330090  399048 system_pods.go:61] "etcd-test-preload-020721" [50de181d-de65-429e-b18a-b35453e95bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:02:15.330099  399048 system_pods.go:61] "kube-apiserver-test-preload-020721" [66008b43-838b-4e23-a1a9-72668fa9cfda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:02:15.330104  399048 system_pods.go:61] "kube-controller-manager-test-preload-020721" [73417839-828d-4299-9294-4b3a491c1b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:02:15.330111  399048 system_pods.go:61] "kube-proxy-pswdv" [79c3a517-6781-4979-b655-38762802ef65] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:02:15.330119  399048 system_pods.go:61] "kube-scheduler-test-preload-020721" [6857695c-5bdb-4cfd-a35d-add7430de889] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:02:15.330128  399048 system_pods.go:61] "storage-provisioner" [17881ef8-6db2-46bf-b883-f3cbb34053a4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:02:15.330141  399048 system_pods.go:74] duration metric: took 7.232834ms to wait for pod list to return data ...
	I1014 20:02:15.330155  399048 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:02:15.340967  399048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:02:15.340997  399048 node_conditions.go:123] node cpu capacity is 2
	I1014 20:02:15.341010  399048 node_conditions.go:105] duration metric: took 10.84999ms to run NodePressure ...
	I1014 20:02:15.341065  399048 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 20:02:15.603047  399048 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1014 20:02:15.606874  399048 kubeadm.go:743] kubelet initialised
	I1014 20:02:15.606897  399048 kubeadm.go:744] duration metric: took 3.82301ms waiting for restarted kubelet to initialise ...
	I1014 20:02:15.606914  399048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:02:15.623717  399048 ops.go:34] apiserver oom_adj: -16
	I1014 20:02:15.623753  399048 kubeadm.go:601] duration metric: took 9.222819927s to restartPrimaryControlPlane
	I1014 20:02:15.623768  399048 kubeadm.go:402] duration metric: took 9.274025951s to StartCluster
	I1014 20:02:15.623793  399048 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:02:15.623902  399048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:02:15.624771  399048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:02:15.625105  399048 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.188 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:02:15.625156  399048 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:02:15.625269  399048 addons.go:69] Setting storage-provisioner=true in profile "test-preload-020721"
	I1014 20:02:15.625292  399048 addons.go:238] Setting addon storage-provisioner=true in "test-preload-020721"
	W1014 20:02:15.625301  399048 addons.go:247] addon storage-provisioner should already be in state true
	I1014 20:02:15.625307  399048 addons.go:69] Setting default-storageclass=true in profile "test-preload-020721"
	I1014 20:02:15.625359  399048 config.go:182] Loaded profile config "test-preload-020721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1014 20:02:15.625369  399048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-020721"
	I1014 20:02:15.625344  399048 host.go:66] Checking if "test-preload-020721" exists ...
	I1014 20:02:15.625873  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:02:15.625880  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:02:15.625927  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:02:15.625975  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:02:15.627574  399048 out.go:179] * Verifying Kubernetes components...
	I1014 20:02:15.628952  399048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:02:15.640833  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I1014 20:02:15.641240  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
	I1014 20:02:15.641564  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:02:15.641816  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:02:15.642144  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:02:15.642168  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:02:15.642331  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:02:15.642360  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:02:15.642531  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:02:15.642728  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:02:15.642911  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetState
	I1014 20:02:15.643129  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:02:15.643177  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:02:15.645635  399048 kapi.go:59] client config for test-preload-020721: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.key", CAFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:02:15.645876  399048 addons.go:238] Setting addon default-storageclass=true in "test-preload-020721"
	W1014 20:02:15.645890  399048 addons.go:247] addon default-storageclass should already be in state true
	I1014 20:02:15.645954  399048 host.go:66] Checking if "test-preload-020721" exists ...
	I1014 20:02:15.646197  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:02:15.646232  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:02:15.657948  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I1014 20:02:15.658599  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:02:15.659170  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:02:15.659201  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:02:15.659692  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:02:15.659918  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetState
	I1014 20:02:15.660265  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I1014 20:02:15.660929  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:02:15.661453  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:02:15.661479  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:02:15.661866  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:02:15.662177  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:15.662545  399048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:02:15.662595  399048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:02:15.666454  399048 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:02:15.667840  399048 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:02:15.667869  399048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:02:15.667897  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:15.671871  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:15.672397  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:15.672426  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:15.672712  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:15.672910  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:15.673109  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:15.673286  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:02:15.677550  399048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35631
	I1014 20:02:15.678090  399048 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:02:15.678692  399048 main.go:141] libmachine: Using API Version  1
	I1014 20:02:15.678723  399048 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:02:15.679049  399048 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:02:15.679233  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetState
	I1014 20:02:15.681236  399048 main.go:141] libmachine: (test-preload-020721) Calling .DriverName
	I1014 20:02:15.681498  399048 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:02:15.681519  399048 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:02:15.681548  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHHostname
	I1014 20:02:15.684825  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:15.685303  399048 main.go:141] libmachine: (test-preload-020721) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:6c:97", ip: ""} in network mk-test-preload-020721: {Iface:virbr1 ExpiryTime:2025-10-14 21:01:56 +0000 UTC Type:0 Mac:52:54:00:8d:6c:97 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:test-preload-020721 Clientid:01:52:54:00:8d:6c:97}
	I1014 20:02:15.685352  399048 main.go:141] libmachine: (test-preload-020721) DBG | domain test-preload-020721 has defined IP address 192.168.39.188 and MAC address 52:54:00:8d:6c:97 in network mk-test-preload-020721
	I1014 20:02:15.685541  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHPort
	I1014 20:02:15.685741  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHKeyPath
	I1014 20:02:15.685862  399048 main.go:141] libmachine: (test-preload-020721) Calling .GetSSHUsername
	I1014 20:02:15.685972  399048 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/test-preload-020721/id_rsa Username:docker}
	I1014 20:02:15.847725  399048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:02:15.875796  399048 node_ready.go:35] waiting up to 6m0s for node "test-preload-020721" to be "Ready" ...
	I1014 20:02:15.879986  399048 node_ready.go:49] node "test-preload-020721" is "Ready"
	I1014 20:02:15.880025  399048 node_ready.go:38] duration metric: took 4.174636ms for node "test-preload-020721" to be "Ready" ...
	I1014 20:02:15.880041  399048 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:02:15.880090  399048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:02:15.903923  399048 api_server.go:72] duration metric: took 278.774854ms to wait for apiserver process to appear ...
	I1014 20:02:15.903953  399048 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:02:15.903972  399048 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1014 20:02:15.910371  399048 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1014 20:02:15.912549  399048 api_server.go:141] control plane version: v1.32.0
	I1014 20:02:15.912576  399048 api_server.go:131] duration metric: took 8.615539ms to wait for apiserver health ...
	I1014 20:02:15.912589  399048 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:02:15.916835  399048 system_pods.go:59] 7 kube-system pods found
	I1014 20:02:15.916866  399048 system_pods.go:61] "coredns-668d6bf9bc-82859" [debb753d-345c-49da-bf8e-0a0d1fba55ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:02:15.916875  399048 system_pods.go:61] "etcd-test-preload-020721" [50de181d-de65-429e-b18a-b35453e95bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:02:15.916887  399048 system_pods.go:61] "kube-apiserver-test-preload-020721" [66008b43-838b-4e23-a1a9-72668fa9cfda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:02:15.916895  399048 system_pods.go:61] "kube-controller-manager-test-preload-020721" [73417839-828d-4299-9294-4b3a491c1b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:02:15.916901  399048 system_pods.go:61] "kube-proxy-pswdv" [79c3a517-6781-4979-b655-38762802ef65] Running
	I1014 20:02:15.916910  399048 system_pods.go:61] "kube-scheduler-test-preload-020721" [6857695c-5bdb-4cfd-a35d-add7430de889] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:02:15.916915  399048 system_pods.go:61] "storage-provisioner" [17881ef8-6db2-46bf-b883-f3cbb34053a4] Running
	I1014 20:02:15.916925  399048 system_pods.go:74] duration metric: took 4.329192ms to wait for pod list to return data ...
	I1014 20:02:15.916937  399048 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:02:15.918666  399048 default_sa.go:45] found service account: "default"
	I1014 20:02:15.918688  399048 default_sa.go:55] duration metric: took 1.744318ms for default service account to be created ...
	I1014 20:02:15.918698  399048 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:02:15.922535  399048 system_pods.go:86] 7 kube-system pods found
	I1014 20:02:15.922573  399048 system_pods.go:89] "coredns-668d6bf9bc-82859" [debb753d-345c-49da-bf8e-0a0d1fba55ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:02:15.922584  399048 system_pods.go:89] "etcd-test-preload-020721" [50de181d-de65-429e-b18a-b35453e95bdf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:02:15.922595  399048 system_pods.go:89] "kube-apiserver-test-preload-020721" [66008b43-838b-4e23-a1a9-72668fa9cfda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:02:15.922604  399048 system_pods.go:89] "kube-controller-manager-test-preload-020721" [73417839-828d-4299-9294-4b3a491c1b54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:02:15.922611  399048 system_pods.go:89] "kube-proxy-pswdv" [79c3a517-6781-4979-b655-38762802ef65] Running
	I1014 20:02:15.922621  399048 system_pods.go:89] "kube-scheduler-test-preload-020721" [6857695c-5bdb-4cfd-a35d-add7430de889] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:02:15.922627  399048 system_pods.go:89] "storage-provisioner" [17881ef8-6db2-46bf-b883-f3cbb34053a4] Running
	I1014 20:02:15.922637  399048 system_pods.go:126] duration metric: took 3.931799ms to wait for k8s-apps to be running ...
	I1014 20:02:15.922650  399048 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:02:15.922702  399048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:02:15.945634  399048 system_svc.go:56] duration metric: took 22.968911ms WaitForService to wait for kubelet
	I1014 20:02:15.945677  399048 kubeadm.go:586] duration metric: took 320.531772ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:02:15.945708  399048 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:02:15.947647  399048 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:02:15.947682  399048 node_conditions.go:123] node cpu capacity is 2
	I1014 20:02:15.947699  399048 node_conditions.go:105] duration metric: took 1.983578ms to run NodePressure ...
	I1014 20:02:15.947715  399048 start.go:241] waiting for startup goroutines ...
	I1014 20:02:16.027843  399048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:02:16.041365  399048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:02:16.739736  399048 main.go:141] libmachine: Making call to close driver server
	I1014 20:02:16.739775  399048 main.go:141] libmachine: (test-preload-020721) Calling .Close
	I1014 20:02:16.739786  399048 main.go:141] libmachine: Making call to close driver server
	I1014 20:02:16.739809  399048 main.go:141] libmachine: (test-preload-020721) Calling .Close
	I1014 20:02:16.740094  399048 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:02:16.740110  399048 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:02:16.740119  399048 main.go:141] libmachine: Making call to close driver server
	I1014 20:02:16.740127  399048 main.go:141] libmachine: (test-preload-020721) Calling .Close
	I1014 20:02:16.740188  399048 main.go:141] libmachine: (test-preload-020721) DBG | Closing plugin on server side
	I1014 20:02:16.740205  399048 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:02:16.740225  399048 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:02:16.740234  399048 main.go:141] libmachine: Making call to close driver server
	I1014 20:02:16.740241  399048 main.go:141] libmachine: (test-preload-020721) Calling .Close
	I1014 20:02:16.740446  399048 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:02:16.740464  399048 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:02:16.740489  399048 main.go:141] libmachine: (test-preload-020721) DBG | Closing plugin on server side
	I1014 20:02:16.740495  399048 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:02:16.740502  399048 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:02:16.747427  399048 main.go:141] libmachine: Making call to close driver server
	I1014 20:02:16.747446  399048 main.go:141] libmachine: (test-preload-020721) Calling .Close
	I1014 20:02:16.747774  399048 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:02:16.747793  399048 main.go:141] libmachine: (test-preload-020721) DBG | Closing plugin on server side
	I1014 20:02:16.747803  399048 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:02:16.749594  399048 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 20:02:16.750772  399048 addons.go:514] duration metric: took 1.12562922s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 20:02:16.750814  399048 start.go:246] waiting for cluster config update ...
	I1014 20:02:16.750830  399048 start.go:255] writing updated cluster config ...
	I1014 20:02:16.751065  399048 ssh_runner.go:195] Run: rm -f paused
	I1014 20:02:16.756468  399048 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:02:16.756943  399048 kapi.go:59] client config for test-preload-020721: &rest.Config{Host:"https://192.168.39.188:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/profiles/test-preload-020721/client.key", CAFile:"/home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 20:02:16.759799  399048 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-82859" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:02:18.765825  399048 pod_ready.go:104] pod "coredns-668d6bf9bc-82859" is not "Ready", error: <nil>
	W1014 20:02:20.766190  399048 pod_ready.go:104] pod "coredns-668d6bf9bc-82859" is not "Ready", error: <nil>
	I1014 20:02:23.266090  399048 pod_ready.go:94] pod "coredns-668d6bf9bc-82859" is "Ready"
	I1014 20:02:23.266132  399048 pod_ready.go:86] duration metric: took 6.506312317s for pod "coredns-668d6bf9bc-82859" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:23.269015  399048 pod_ready.go:83] waiting for pod "etcd-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:23.274443  399048 pod_ready.go:94] pod "etcd-test-preload-020721" is "Ready"
	I1014 20:02:23.274465  399048 pod_ready.go:86] duration metric: took 5.427563ms for pod "etcd-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:23.276583  399048 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:02:25.283027  399048 pod_ready.go:104] pod "kube-apiserver-test-preload-020721" is not "Ready", error: <nil>
	W1014 20:02:27.783370  399048 pod_ready.go:104] pod "kube-apiserver-test-preload-020721" is not "Ready", error: <nil>
	I1014 20:02:29.782718  399048 pod_ready.go:94] pod "kube-apiserver-test-preload-020721" is "Ready"
	I1014 20:02:29.782746  399048 pod_ready.go:86] duration metric: took 6.506145247s for pod "kube-apiserver-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.784949  399048 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.788984  399048 pod_ready.go:94] pod "kube-controller-manager-test-preload-020721" is "Ready"
	I1014 20:02:29.789007  399048 pod_ready.go:86] duration metric: took 4.036314ms for pod "kube-controller-manager-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.791248  399048 pod_ready.go:83] waiting for pod "kube-proxy-pswdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.795195  399048 pod_ready.go:94] pod "kube-proxy-pswdv" is "Ready"
	I1014 20:02:29.795213  399048 pod_ready.go:86] duration metric: took 3.944571ms for pod "kube-proxy-pswdv" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.797667  399048 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.980220  399048 pod_ready.go:94] pod "kube-scheduler-test-preload-020721" is "Ready"
	I1014 20:02:29.980263  399048 pod_ready.go:86] duration metric: took 182.571485ms for pod "kube-scheduler-test-preload-020721" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:02:29.980283  399048 pod_ready.go:40] duration metric: took 13.223786485s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
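
Each pod_ready wait above boils down to reading the pod's PodReady condition. Below is a hedged client-go sketch of that check, assuming a standard kubeconfig is reachable; podIsReady is a hypothetical helper, and the pod and namespace arguments simply mirror the log.

    // podready.go - hedged sketch of a PodReady condition check via client-go.
    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has condition Ready=True.
    func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        ready, err := podIsReady(cs, "kube-system", "kube-scheduler-test-preload-020721")
        fmt.Println("ready:", ready, "err:", err)
    }
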
	I1014 20:02:30.022688  399048 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1014 20:02:30.024368  399048 out.go:203] 
	W1014 20:02:30.025452  399048 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1014 20:02:30.026453  399048 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1014 20:02:30.027610  399048 out.go:179] * Done! kubectl is now configured to use "test-preload-020721" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.890655669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472150890633294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9437e5ca-1dab-45cb-9b88-7d53db958d7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.891305362Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb898c5c-8dd6-4b91-a210-326439b51e3f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.891534382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb898c5c-8dd6-4b91-a210-326439b51e3f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.891908250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00e9b044d5d035665bb6fa4930a9075fc5d60f2443c6b1f0e1eb743821b4ccb3,PodSandboxId:361db2e10f09b3c7a0d8d3de8e3f2dfc00c3bce538b50721fe8afba7936972fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760472138202090885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-82859,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb753d-345c-49da-bf8e-0a0d1fba55ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5eb9439755f4f67e9367b2da21d79b384e27ad5fee1c6de2588d26babc4c27,PodSandboxId:3f8b79c1c280f9943afce11b1528ba1551ed700cc5534442d7337492b2381d84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760472134573367794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pswdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79c3a517-6781-4979-b655-38762802ef65,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e86a92e6b9e69c35b61445f32b994a33f1f5efc5a7d36ac13f1c53211d4e4cc,PodSandboxId:a16c1755b06a7ba5916ca780f3b5625fa47c5e724c2bd707091f9b4a68023d8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472134559873060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17
881ef8-6db2-46bf-b883-f3cbb34053a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e1dec955879a80bda64a6ff8eae16ac8478f341c59c23faa2d983f8818d46,PodSandboxId:8fd05ddd406b87d9a0f2e2f9d04083a10c6e00642250ccde96dc6118523b1928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760472130599406626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aceccc797
9999a1630c09022ccd1d5dc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d6382b6e1eeef77f0f4c73b5eaae291a26652edf9449c9e98e77916f558e056,PodSandboxId:e12e07dbb98900e4f053abce993eac26f897cacbc1f184b627fdae41a6349c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760472130573372086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cd191caab1b70abc6e043cfb9110a553,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e961ff40e5ef27bab9fdb130c66b23cba23b16c306c85c8778ce7467846a932,PodSandboxId:798a2edcff1f29c90cb2ea233f3f906d0c459276e40a234e4c2b2ceea18e5408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760472130529368845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52d
a1a9845b99bc638d444311f2024e,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d08e653f7ce44086eb502c5f25493ce0c79c6dd98356252fc22a8e12958e5dd,PodSandboxId:de318ecc92d6be04bebee9c29d2357e4137011b8dbedccf829f7cd41a3b18907,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760472130507717532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad91e0511ecc1ad754710e08ea21b1b,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb898c5c-8dd6-4b91-a210-326439b51e3f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.929650086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=76a5ddf5-1c34-4476-b758-e69f8b3ef3ff name=/runtime.v1.RuntimeService/Version
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.929922525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=76a5ddf5-1c34-4476-b758-e69f8b3ef3ff name=/runtime.v1.RuntimeService/Version
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.931195840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5c54561-ca9b-476e-a11c-1627807d433c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.931698017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472150931677844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5c54561-ca9b-476e-a11c-1627807d433c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.932153176Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eebeb43e-8ce6-42e6-a29b-9747583d147f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.932299346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eebeb43e-8ce6-42e6-a29b-9747583d147f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:02:30 test-preload-020721 crio[828]: time="2025-10-14 20:02:30.932800825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:00e9b044d5d035665bb6fa4930a9075fc5d60f2443c6b1f0e1eb743821b4ccb3,PodSandboxId:361db2e10f09b3c7a0d8d3de8e3f2dfc00c3bce538b50721fe8afba7936972fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760472138202090885,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-82859,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: debb753d-345c-49da-bf8e-0a0d1fba55ea,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5eb9439755f4f67e9367b2da21d79b384e27ad5fee1c6de2588d26babc4c27,PodSandboxId:3f8b79c1c280f9943afce11b1528ba1551ed700cc5534442d7337492b2381d84,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760472134573367794,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pswdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 79c3a517-6781-4979-b655-38762802ef65,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e86a92e6b9e69c35b61445f32b994a33f1f5efc5a7d36ac13f1c53211d4e4cc,PodSandboxId:a16c1755b06a7ba5916ca780f3b5625fa47c5e724c2bd707091f9b4a68023d8c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472134559873060,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17
881ef8-6db2-46bf-b883-f3cbb34053a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca5e1dec955879a80bda64a6ff8eae16ac8478f341c59c23faa2d983f8818d46,PodSandboxId:8fd05ddd406b87d9a0f2e2f9d04083a10c6e00642250ccde96dc6118523b1928,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760472130599406626,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aceccc797
9999a1630c09022ccd1d5dc,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d6382b6e1eeef77f0f4c73b5eaae291a26652edf9449c9e98e77916f558e056,PodSandboxId:e12e07dbb98900e4f053abce993eac26f897cacbc1f184b627fdae41a6349c5b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760472130573372086,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: cd191caab1b70abc6e043cfb9110a553,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e961ff40e5ef27bab9fdb130c66b23cba23b16c306c85c8778ce7467846a932,PodSandboxId:798a2edcff1f29c90cb2ea233f3f906d0c459276e40a234e4c2b2ceea18e5408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760472130529368845,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a52d
a1a9845b99bc638d444311f2024e,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d08e653f7ce44086eb502c5f25493ce0c79c6dd98356252fc22a8e12958e5dd,PodSandboxId:de318ecc92d6be04bebee9c29d2357e4137011b8dbedccf829f7cd41a3b18907,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760472130507717532,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-020721,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ad91e0511ecc1ad754710e08ea21b1b,},Annotation
s:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eebeb43e-8ce6-42e6-a29b-9747583d147f name=/runtime.v1.RuntimeService/ListContainers
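The Version / ImageFsInfo / ListContainers request and response pairs above are a CRI client polling cri-o over unix:///var/run/crio/crio.sock (the socket recorded in the node's kubeadm annotation further down). The same ListContainers call can be made directly; a minimal sketch, assuming the k8s.io/cri-api Go bindings and that socket path, purely illustrative rather than how this report's logs were collected:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Socket path taken from the kubeadm.alpha.kubernetes.io/cri-socket annotation.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// An empty filter is what triggers crio's "No filters were applied,
    	// returning full container list" debug line seen above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
    		Filter: &runtimeapi.ContainerFilter{},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
    	}
    }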
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	00e9b044d5d03       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   361db2e10f09b       coredns-668d6bf9bc-82859
	4e5eb9439755f       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   3f8b79c1c280f       kube-proxy-pswdv
	7e86a92e6b9e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   a16c1755b06a7       storage-provisioner
	ca5e1dec95587       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   8fd05ddd406b8       kube-scheduler-test-preload-020721
	3d6382b6e1eee       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   e12e07dbb9890       kube-controller-manager-test-preload-020721
	8e961ff40e5ef       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   798a2edcff1f2       kube-apiserver-test-preload-020721
	9d08e653f7ce4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   de318ecc92d6b       etcd-test-preload-020721
	
	
	==> coredns [00e9b044d5d035665bb6fa4930a9075fc5d60f2443c6b1f0e1eb743821b4ccb3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:52035 - 3874 "HINFO IN 50770022018107120.5404682392655784285. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.029100908s
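The lone HINFO query logged above, a long random name answered NXDOMAIN, is CoreDNS probing itself at startup (the loop-detection plugin sends such a query to spot forwarding loops). A comparable query can be sent by hand; a sketch assuming the github.com/miekg/dns package, with a placeholder probe name:

    package main

    import (
    	"fmt"
    	"log"

    	"github.com/miekg/dns"
    )

    func main() {
    	m := new(dns.Msg)
    	// Placeholder name standing in for the random numeric probe in the log.
    	m.SetQuestion(dns.Fqdn("example-probe.invalid"), dns.TypeHINFO)

    	c := new(dns.Client)
    	c.Net = "udp"
    	in, rtt, err := c.Exchange(m, "127.0.0.1:53")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// NXDOMAIN here matches the CoreDNS log line above.
    	fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[in.Rcode], rtt)
    }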
	
	
	==> describe nodes <==
	Name:               test-preload-020721
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-020721
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=test-preload-020721
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_00_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:00:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-020721
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:02:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:02:15 +0000   Tue, 14 Oct 2025 20:00:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:02:15 +0000   Tue, 14 Oct 2025 20:00:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:02:15 +0000   Tue, 14 Oct 2025 20:00:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:02:15 +0000   Tue, 14 Oct 2025 20:02:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.188
	  Hostname:    test-preload-020721
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 53224ae3587a4322a467b5facc589ec4
	  System UUID:                53224ae3-587a-4322-a467-b5facc589ec4
	  Boot ID:                    2d92928f-2405-4129-baa3-39f03d529137
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-82859                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     102s
	  kube-system                 etcd-test-preload-020721                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         107s
	  kube-system                 kube-apiserver-test-preload-020721             250m (12%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-test-preload-020721    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-pswdv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-test-preload-020721             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 99s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  107s               kubelet          Node test-preload-020721 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  107s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    107s               kubelet          Node test-preload-020721 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s               kubelet          Node test-preload-020721 status is now: NodeHasSufficientPID
	  Normal   NodeReady                107s               kubelet          Node test-preload-020721 status is now: NodeReady
	  Normal   Starting                 107s               kubelet          Starting kubelet.
	  Normal   RegisteredNode           103s               node-controller  Node test-preload-020721 event: Registered Node test-preload-020721 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-020721 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-020721 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-020721 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                kubelet          Node test-preload-020721 has been rebooted, boot id: 2d92928f-2405-4129-baa3-39f03d529137
	  Normal   RegisteredNode           15s                node-controller  Node test-preload-020721 event: Registered Node test-preload-020721 in Controller
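Read together with the Boot ID under System Info, the events show one clean restart: the 20:00 bring-up, then a reboot roughly 20 seconds before this capture (the Rebooted warning cites the same boot id, 2d92928f-...), followed by re-registration. The same trail can be fetched programmatically; a sketch assuming client-go and a placeholder kubeconfig path:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Empty namespace searches all namespaces; node events usually land in "default".
    	evs, err := cs.CoreV1().Events("").List(context.Background(), metav1.ListOptions{
    		FieldSelector: "involvedObject.kind=Node,involvedObject.name=test-preload-020721",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range evs.Items {
    		fmt.Printf("%-7s %-24s %s\n", e.Type, e.Reason, e.Message)
    	}
    }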
	
	
	==> dmesg <==
	[Oct14 20:01] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000049] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001477] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.023092] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct14 20:02] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.096411] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.477416] kauditd_printk_skb: 177 callbacks suppressed
	[  +5.086286] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [9d08e653f7ce44086eb502c5f25493ce0c79c6dd98356252fc22a8e12958e5dd] <==
	{"level":"info","ts":"2025-10-14T20:02:10.870744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 switched to configuration voters=(555895692539081688)"}
	{"level":"info","ts":"2025-10-14T20:02:10.870883Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7653764497079f73","local-member-id":"7b6f02fe5f633d8","added-peer-id":"7b6f02fe5f633d8","added-peer-peer-urls":["https://192.168.39.188:2380"]}
	{"level":"info","ts":"2025-10-14T20:02:10.872037Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7653764497079f73","local-member-id":"7b6f02fe5f633d8","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T20:02:10.872087Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-14T20:02:10.877741Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-14T20:02:10.881305Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"7b6f02fe5f633d8","initial-advertise-peer-urls":["https://192.168.39.188:2380"],"listen-peer-urls":["https://192.168.39.188:2380"],"advertise-client-urls":["https://192.168.39.188:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.188:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-14T20:02:10.881354Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-14T20:02:10.881707Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.188:2380"}
	{"level":"info","ts":"2025-10-14T20:02:10.881732Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.188:2380"}
	{"level":"info","ts":"2025-10-14T20:02:12.640485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-14T20:02:12.640535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-14T20:02:12.640568Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 received MsgPreVoteResp from 7b6f02fe5f633d8 at term 2"}
	{"level":"info","ts":"2025-10-14T20:02:12.640581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became candidate at term 3"}
	{"level":"info","ts":"2025-10-14T20:02:12.640587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 received MsgVoteResp from 7b6f02fe5f633d8 at term 3"}
	{"level":"info","ts":"2025-10-14T20:02:12.640594Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7b6f02fe5f633d8 became leader at term 3"}
	{"level":"info","ts":"2025-10-14T20:02:12.640602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7b6f02fe5f633d8 elected leader 7b6f02fe5f633d8 at term 3"}
	{"level":"info","ts":"2025-10-14T20:02:12.643047Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"7b6f02fe5f633d8","local-member-attributes":"{Name:test-preload-020721 ClientURLs:[https://192.168.39.188:2379]}","request-path":"/0/members/7b6f02fe5f633d8/attributes","cluster-id":"7653764497079f73","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-14T20:02:12.643190Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T20:02:12.643339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-14T20:02:12.644079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:02:12.644222Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T20:02:12.644239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-14T20:02:12.644694Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:02:12.644718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T20:02:12.645261Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.188:2379"}
	
	
	==> kernel <==
	 20:02:31 up 0 min,  0 users,  load average: 0.62, 0.16, 0.05
	Linux test-preload-020721 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8e961ff40e5ef27bab9fdb130c66b23cba23b16c306c85c8778ce7467846a932] <==
	I1014 20:02:13.794389       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 20:02:13.794455       1 aggregator.go:171] initial CRD sync complete...
	I1014 20:02:13.794463       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 20:02:13.794468       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 20:02:13.794473       1 cache.go:39] Caches are synced for autoregister controller
	I1014 20:02:13.816492       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 20:02:13.816576       1 policy_source.go:240] refreshing policies
	I1014 20:02:13.836483       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:02:13.845510       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 20:02:13.845528       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 20:02:13.845593       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:02:13.845625       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 20:02:13.846184       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 20:02:13.846484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:02:13.846944       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 20:02:13.849956       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1014 20:02:14.231603       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 20:02:14.652991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 20:02:15.413056       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 20:02:15.446939       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 20:02:15.480987       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 20:02:15.487391       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 20:02:17.234953       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 20:02:17.335797       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 20:02:17.385822       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3d6382b6e1eeef77f0f4c73b5eaae291a26652edf9449c9e98e77916f558e056] <==
	I1014 20:02:16.988873       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 20:02:16.989704       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 20:02:16.991141       1 shared_informer.go:320] Caches are synced for node
	I1014 20:02:16.991710       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1014 20:02:16.991954       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 20:02:16.991979       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 20:02:16.992043       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 20:02:16.992150       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-020721"
	I1014 20:02:16.992327       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 20:02:16.995863       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 20:02:16.998166       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 20:02:16.999722       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 20:02:17.000992       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 20:02:17.002148       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 20:02:17.007390       1 shared_informer.go:320] Caches are synced for service account
	I1014 20:02:17.015870       1 shared_informer.go:320] Caches are synced for disruption
	I1014 20:02:17.018326       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 20:02:17.032338       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 20:02:17.032385       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:02:17.032393       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 20:02:17.345678       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="356.695953ms"
	I1014 20:02:17.345798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="44.844µs"
	I1014 20:02:18.321192       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="55.338µs"
	I1014 20:02:23.132599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.270637ms"
	I1014 20:02:23.132815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="123.241µs"
	
	
	==> kube-proxy [4e5eb9439755f4f67e9367b2da21d79b384e27ad5fee1c6de2588d26babc4c27] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 20:02:14.851504       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 20:02:14.879497       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.188"]
	E1014 20:02:14.880496       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:02:14.919254       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1014 20:02:14.919283       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 20:02:14.919306       1 server_linux.go:170] "Using iptables Proxier"
	I1014 20:02:14.922119       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:02:14.922415       1 server.go:497] "Version info" version="v1.32.0"
	I1014 20:02:14.922666       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:02:14.924603       1 config.go:199] "Starting service config controller"
	I1014 20:02:14.924661       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 20:02:14.924701       1 config.go:105] "Starting endpoint slice config controller"
	I1014 20:02:14.924717       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 20:02:14.925301       1 config.go:329] "Starting node config controller"
	I1014 20:02:14.925342       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 20:02:15.025534       1 shared_informer.go:320] Caches are synced for service config
	I1014 20:02:15.025564       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 20:02:15.025888       1 shared_informer.go:320] Caches are synced for node config
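The truncated first error and the full ip6 one above are kube-proxy's best-effort cleanup of stale nftables rules before it settles on the iptables proxier: it pipes the rule text into nft via /dev/stdin, this kernel rejects it with "Operation not supported", and kube-proxy then proceeds with iptables as the later lines show. The probe is easy to reproduce; a sketch assuming nft is on PATH (and run as root):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same shape of probe kube-proxy feeds through /dev/stdin in the error above.
    	cmd := exec.Command("nft", "-f", "/dev/stdin")
    	cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("err=%v\n%s", err, out)
    }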
	
	
	==> kube-scheduler [ca5e1dec955879a80bda64a6ff8eae16ac8478f341c59c23faa2d983f8818d46] <==
	I1014 20:02:11.552332       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:02:13.723220       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 20:02:13.723255       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 20:02:13.723265       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:02:13.723274       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:02:13.765895       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1014 20:02:13.765975       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:02:13.771177       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:02:13.771255       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 20:02:13.771291       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 20:02:13.771393       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:02:13.871828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: I1014 20:02:13.890944    1151 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-020721"
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: I1014 20:02:13.890969    1151 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: I1014 20:02:13.892219    1151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: I1014 20:02:13.893093    1151 setters.go:602] "Node became not ready" node="test-preload-020721" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-14T20:02:13Z","lastTransitionTime":"2025-10-14T20:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: E1014 20:02:13.903120    1151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-020721\" already exists" pod="kube-system/kube-apiserver-test-preload-020721"
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: I1014 20:02:13.903146    1151 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-020721"
	Oct 14 20:02:13 test-preload-020721 kubelet[1151]: E1014 20:02:13.916756    1151 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-020721\" already exists" pod="kube-system/kube-controller-manager-test-preload-020721"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: I1014 20:02:14.129560    1151 apiserver.go:52] "Watching apiserver"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: E1014 20:02:14.139370    1151 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-82859" podUID="debb753d-345c-49da-bf8e-0a0d1fba55ea"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: I1014 20:02:14.152749    1151 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: I1014 20:02:14.222189    1151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79c3a517-6781-4979-b655-38762802ef65-xtables-lock\") pod \"kube-proxy-pswdv\" (UID: \"79c3a517-6781-4979-b655-38762802ef65\") " pod="kube-system/kube-proxy-pswdv"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: I1014 20:02:14.222246    1151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79c3a517-6781-4979-b655-38762802ef65-lib-modules\") pod \"kube-proxy-pswdv\" (UID: \"79c3a517-6781-4979-b655-38762802ef65\") " pod="kube-system/kube-proxy-pswdv"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: I1014 20:02:14.222279    1151 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/17881ef8-6db2-46bf-b883-f3cbb34053a4-tmp\") pod \"storage-provisioner\" (UID: \"17881ef8-6db2-46bf-b883-f3cbb34053a4\") " pod="kube-system/storage-provisioner"
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: E1014 20:02:14.222715    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: E1014 20:02:14.222798    1151 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume podName:debb753d-345c-49da-bf8e-0a0d1fba55ea nodeName:}" failed. No retries permitted until 2025-10-14 20:02:14.72277741 +0000 UTC m=+6.693419193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume") pod "coredns-668d6bf9bc-82859" (UID: "debb753d-345c-49da-bf8e-0a0d1fba55ea") : object "kube-system"/"coredns" not registered
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: E1014 20:02:14.726277    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 20:02:14 test-preload-020721 kubelet[1151]: E1014 20:02:14.726478    1151 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume podName:debb753d-345c-49da-bf8e-0a0d1fba55ea nodeName:}" failed. No retries permitted until 2025-10-14 20:02:15.726376159 +0000 UTC m=+7.697017941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume") pod "coredns-668d6bf9bc-82859" (UID: "debb753d-345c-49da-bf8e-0a0d1fba55ea") : object "kube-system"/"coredns" not registered
	Oct 14 20:02:15 test-preload-020721 kubelet[1151]: I1014 20:02:15.577203    1151 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 14 20:02:15 test-preload-020721 kubelet[1151]: E1014 20:02:15.734876    1151 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 20:02:15 test-preload-020721 kubelet[1151]: E1014 20:02:15.734967    1151 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume podName:debb753d-345c-49da-bf8e-0a0d1fba55ea nodeName:}" failed. No retries permitted until 2025-10-14 20:02:17.734952347 +0000 UTC m=+9.705594141 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/debb753d-345c-49da-bf8e-0a0d1fba55ea-config-volume") pod "coredns-668d6bf9bc-82859" (UID: "debb753d-345c-49da-bf8e-0a0d1fba55ea") : object "kube-system"/"coredns" not registered
	Oct 14 20:02:18 test-preload-020721 kubelet[1151]: E1014 20:02:18.233690    1151 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472138233300945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 20:02:18 test-preload-020721 kubelet[1151]: E1014 20:02:18.233732    1151 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472138233300945,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 20:02:23 test-preload-020721 kubelet[1151]: I1014 20:02:23.102538    1151 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 20:02:28 test-preload-020721 kubelet[1151]: E1014 20:02:28.236528    1151 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472148235292169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 20:02:28 test-preload-020721 kubelet[1151]: E1014 20:02:28.236551    1151 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472148235292169,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [7e86a92e6b9e69c35b61445f32b994a33f1f5efc5a7d36ac13f1c53211d4e4cc] <==
	I1014 20:02:14.659792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
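Editor's note on the kube-proxy log above: the "Error cleaning up nftables rules ... Operation not supported" entries mean the guest kernel lacks nf_tables support, so kube-proxy cannot manage nftables tables and falls back to the iptables proxier ("No iptables support for family" IPv6, then "Using iptables Proxier"). Below is a minimal Go sketch of the same probe, assuming the nft binary is present in the guest; it is illustrative only and not part of the minikube test suite.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Feed the same command kube-proxy pipes to nft via /dev/stdin.
		cmd := exec.Command("nft", "-f", "/dev/stdin")
		cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// On kernels without nf_tables this prints
			// "Error: Could not process rule: Operation not supported",
			// matching the proxier.go:733 errors in the log above.
			fmt.Printf("nftables unavailable: %v\n%s", err, out)
			return
		}
		fmt.Println("nftables supported; table ip kube-proxy created")
	}

On a kernel with nf_tables compiled in, the same command succeeds and the cleanup path would not log these errors; the fallback itself is harmless for this test run.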
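Editor's note on the kubelet log above: the repeated MountVolume.SetUp failures for the coredns config-volume are retried with a doubling delay (durationBeforeRetry 500ms, then 1s, then 2s) until the "kube-system"/"coredns" ConfigMap is registered in the informer caches, after which the node recovers. A minimal Go sketch of that doubling schedule follows; the cap value is an assumption for illustration, not taken from kubelet source.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond // first durationBeforeRetry seen in the log
		maxDelay := 2 * time.Minute     // assumed ceiling, illustrative only
		for attempt := 1; attempt <= 5; attempt++ {
			fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
			delay *= 2 // matches the 500ms -> 1s -> 2s progression above
			if delay > maxDelay {
				delay = maxDelay
			}
		}
	}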
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-020721 -n test-preload-020721
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-020721 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-020721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-020721
--- FAIL: TestPreload (159.27s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (75.81s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-488160 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 20:06:22.790932  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-488160 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m11.395091654s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-488160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-488160" primary control-plane node in "pause-488160" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-488160" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
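Editor's note before the stderr trace: the assertion at pause_test.go:100 checks the captured output of the second start for the literal marker quoted above, which never appears in the stdout dump. A minimal Go sketch of that kind of substring check (the variable names are illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		const marker = "The running cluster does not require reconfiguration"
		// A line taken from the stdout dump above; the real test inspects
		// the full combined output of the second `minikube start`.
		secondStartOutput := `* Using the kvm2 driver based on existing profile`
		if !strings.Contains(secondStartOutput, marker) {
			fmt.Printf("expected the second start log output to include %q\n", marker)
		}
	}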
** stderr ** 
	I1014 20:06:07.706487  402335 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:06:07.706627  402335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:06:07.706638  402335 out.go:374] Setting ErrFile to fd 2...
	I1014 20:06:07.706645  402335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:06:07.706932  402335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:06:07.707528  402335 out.go:368] Setting JSON to false
	I1014 20:06:07.708858  402335 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6511,"bootTime":1760465857,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:06:07.708960  402335 start.go:141] virtualization: kvm guest
	I1014 20:06:07.710963  402335 out.go:179] * [pause-488160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:06:07.712576  402335 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:06:07.712569  402335 notify.go:220] Checking for updates...
	I1014 20:06:07.715031  402335 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:06:07.716566  402335 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:06:07.718070  402335 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:06:07.719349  402335 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:06:07.720859  402335 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:06:07.722793  402335 config.go:182] Loaded profile config "pause-488160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:06:07.723449  402335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:06:07.723519  402335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:06:07.742742  402335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38193
	I1014 20:06:07.743543  402335 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:06:07.744240  402335 main.go:141] libmachine: Using API Version  1
	I1014 20:06:07.744275  402335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:06:07.744793  402335 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:06:07.745016  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:07.745402  402335 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:06:07.745907  402335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:06:07.745958  402335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:06:07.760752  402335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38957
	I1014 20:06:07.761353  402335 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:06:07.761934  402335 main.go:141] libmachine: Using API Version  1
	I1014 20:06:07.761962  402335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:06:07.762524  402335 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:06:07.762805  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:08.612254  402335 out.go:179] * Using the kvm2 driver based on existing profile
	I1014 20:06:08.613600  402335 start.go:305] selected driver: kvm2
	I1014 20:06:08.613619  402335 start.go:925] validating driver "kvm2" against &{Name:pause-488160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-488160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:06:08.613764  402335 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:06:08.614137  402335 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:06:08.614249  402335 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:06:08.630337  402335 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:06:08.630382  402335 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:06:08.651168  402335 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:06:08.652223  402335 cni.go:84] Creating CNI manager for ""
	I1014 20:06:08.652302  402335 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:06:08.652418  402335 start.go:349] cluster config:
	{Name:pause-488160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-488160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:06:08.652644  402335 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:06:08.654970  402335 out.go:179] * Starting "pause-488160" primary control-plane node in "pause-488160" cluster
	I1014 20:06:08.656234  402335 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:06:08.656286  402335 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:06:08.656295  402335 cache.go:58] Caching tarball of preloaded images
	I1014 20:06:08.656417  402335 preload.go:233] Found /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:06:08.656430  402335 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:06:08.656548  402335 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/config.json ...
	I1014 20:06:08.656791  402335 start.go:360] acquireMachinesLock for pause-488160: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 20:06:34.504066  402335 start.go:364] duration metric: took 25.847222141s to acquireMachinesLock for "pause-488160"
	I1014 20:06:34.504118  402335 start.go:96] Skipping create...Using existing machine configuration
	I1014 20:06:34.504131  402335 fix.go:54] fixHost starting: 
	I1014 20:06:34.504631  402335 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:06:34.504696  402335 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:06:34.522299  402335 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I1014 20:06:34.522904  402335 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:06:34.523593  402335 main.go:141] libmachine: Using API Version  1
	I1014 20:06:34.523622  402335 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:06:34.524048  402335 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:06:34.524279  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:34.524471  402335 main.go:141] libmachine: (pause-488160) Calling .GetState
	I1014 20:06:34.527002  402335 fix.go:112] recreateIfNeeded on pause-488160: state=Running err=<nil>
	W1014 20:06:34.527024  402335 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 20:06:34.531483  402335 out.go:252] * Updating the running kvm2 "pause-488160" VM ...
	I1014 20:06:34.531528  402335 machine.go:93] provisionDockerMachine start ...
	I1014 20:06:34.531582  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:34.531907  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:34.535380  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.535926  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:34.535955  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.536202  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:34.536453  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.536672  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.536823  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:34.537013  402335 main.go:141] libmachine: Using SSH client type: native
	I1014 20:06:34.537292  402335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1014 20:06:34.537306  402335 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:06:34.646200  402335 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-488160
	
	I1014 20:06:34.646255  402335 main.go:141] libmachine: (pause-488160) Calling .GetMachineName
	I1014 20:06:34.646603  402335 buildroot.go:166] provisioning hostname "pause-488160"
	I1014 20:06:34.646632  402335 main.go:141] libmachine: (pause-488160) Calling .GetMachineName
	I1014 20:06:34.646862  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:34.650279  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.650821  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:34.650849  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.651002  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:34.651223  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.651424  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.651640  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:34.651870  402335 main.go:141] libmachine: Using SSH client type: native
	I1014 20:06:34.652111  402335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1014 20:06:34.652132  402335 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-488160 && echo "pause-488160" | sudo tee /etc/hostname
	I1014 20:06:34.784375  402335 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-488160
	
	I1014 20:06:34.784408  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:34.788077  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.788618  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:34.788648  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.788969  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:34.789160  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.789346  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:34.789529  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:34.789733  402335 main.go:141] libmachine: Using SSH client type: native
	I1014 20:06:34.790039  402335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1014 20:06:34.790070  402335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-488160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-488160/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-488160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:06:34.903242  402335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:06:34.903277  402335 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:06:34.903332  402335 buildroot.go:174] setting up certificates
	I1014 20:06:34.903360  402335 provision.go:84] configureAuth start
	I1014 20:06:34.903376  402335 main.go:141] libmachine: (pause-488160) Calling .GetMachineName
	I1014 20:06:34.903731  402335 main.go:141] libmachine: (pause-488160) Calling .GetIP
	I1014 20:06:34.907381  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.907814  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:34.907853  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.908041  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:34.911430  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.911973  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:34.912005  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:34.912164  402335 provision.go:143] copyHostCerts
	I1014 20:06:34.912226  402335 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:06:34.912248  402335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:06:34.912343  402335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:06:34.912462  402335 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:06:34.912471  402335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:06:34.912495  402335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:06:34.912552  402335 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:06:34.912559  402335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:06:34.912577  402335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:06:34.912629  402335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.pause-488160 san=[127.0.0.1 192.168.50.36 localhost minikube pause-488160]
	I1014 20:06:35.160143  402335 provision.go:177] copyRemoteCerts
	I1014 20:06:35.160195  402335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:06:35.160218  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:35.163425  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:35.163882  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:35.163912  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:35.164137  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:35.164380  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:35.164571  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:35.164719  402335 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/pause-488160/id_rsa Username:docker}
	I1014 20:06:35.256904  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:06:35.300083  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:06:35.339269  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:06:35.382439  402335 provision.go:87] duration metric: took 479.060397ms to configureAuth
	I1014 20:06:35.382472  402335 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:06:35.382726  402335 config.go:182] Loaded profile config "pause-488160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:06:35.382828  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:35.386785  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:35.387332  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:35.387372  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:35.387520  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:35.387743  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:35.387913  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:35.388082  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:35.388344  402335 main.go:141] libmachine: Using SSH client type: native
	I1014 20:06:35.388673  402335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1014 20:06:35.388714  402335 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:06:40.935467  402335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:06:40.935501  402335 machine.go:96] duration metric: took 6.40396361s to provisionDockerMachine
	I1014 20:06:40.935514  402335 start.go:293] postStartSetup for "pause-488160" (driver="kvm2")
	I1014 20:06:40.935525  402335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:06:40.935563  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:40.935945  402335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:06:40.935981  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:40.939450  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:40.939859  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:40.939889  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:40.940040  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:40.940248  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:40.940414  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:40.940575  402335 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/pause-488160/id_rsa Username:docker}
	I1014 20:06:41.020935  402335 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:06:41.026173  402335 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:06:41.026207  402335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:06:41.026272  402335 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:06:41.026384  402335 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:06:41.026486  402335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:06:41.038632  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:06:41.068158  402335 start.go:296] duration metric: took 132.628537ms for postStartSetup
	I1014 20:06:41.068201  402335 fix.go:56] duration metric: took 6.564071357s for fixHost
	I1014 20:06:41.068223  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:41.071181  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.071655  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:41.071690  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.071958  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:41.072172  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:41.072399  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:41.072574  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:41.072749  402335 main.go:141] libmachine: Using SSH client type: native
	I1014 20:06:41.072942  402335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.50.36 22 <nil> <nil>}
	I1014 20:06:41.072952  402335 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:06:41.179831  402335 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760472401.175485926
	
	I1014 20:06:41.179864  402335 fix.go:216] guest clock: 1760472401.175485926
	I1014 20:06:41.179875  402335 fix.go:229] Guest: 2025-10-14 20:06:41.175485926 +0000 UTC Remote: 2025-10-14 20:06:41.068205476 +0000 UTC m=+33.416032611 (delta=107.28045ms)
	I1014 20:06:41.179901  402335 fix.go:200] guest clock delta is within tolerance: 107.28045ms
	I1014 20:06:41.179907  402335 start.go:83] releasing machines lock for "pause-488160", held for 6.675811348s
	I1014 20:06:41.179935  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:41.180253  402335 main.go:141] libmachine: (pause-488160) Calling .GetIP
	I1014 20:06:41.184106  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.184603  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:41.184634  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.184857  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:41.185506  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:41.185714  402335 main.go:141] libmachine: (pause-488160) Calling .DriverName
	I1014 20:06:41.185836  402335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:06:41.185877  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:41.185974  402335 ssh_runner.go:195] Run: cat /version.json
	I1014 20:06:41.186003  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHHostname
	I1014 20:06:41.189242  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.189263  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.189739  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:41.189799  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.189825  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:41.189867  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:41.190034  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:41.190234  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHPort
	I1014 20:06:41.190238  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:41.190455  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:41.190487  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHKeyPath
	I1014 20:06:41.190648  402335 main.go:141] libmachine: (pause-488160) Calling .GetSSHUsername
	I1014 20:06:41.190646  402335 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/pause-488160/id_rsa Username:docker}
	I1014 20:06:41.190778  402335 sshutil.go:53] new ssh client: &{IP:192.168.50.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/pause-488160/id_rsa Username:docker}
	I1014 20:06:41.321128  402335 ssh_runner.go:195] Run: systemctl --version
	I1014 20:06:41.328336  402335 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:06:41.494700  402335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:06:41.505083  402335 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:06:41.505164  402335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:06:41.520284  402335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 20:06:41.520342  402335 start.go:495] detecting cgroup driver to use...
	I1014 20:06:41.520447  402335 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:06:41.547410  402335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:06:41.573187  402335 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:06:41.573258  402335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:06:41.602344  402335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:06:41.620929  402335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:06:41.811567  402335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:06:41.989591  402335 docker.go:234] disabling docker service ...
	I1014 20:06:41.989677  402335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:06:42.018680  402335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:06:42.035867  402335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:06:42.224540  402335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:06:42.423079  402335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:06:42.444884  402335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:06:42.482295  402335 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:06:42.482381  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.501288  402335 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:06:42.501403  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.519475  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.533343  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.547575  402335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:06:42.561445  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.574891  402335 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.593702  402335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:06:42.610954  402335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:06:42.628880  402335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:06:42.643644  402335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:06:42.829566  402335 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:06:43.123518  402335 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:06:43.123612  402335 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:06:43.130566  402335 start.go:563] Will wait 60s for crictl version
	I1014 20:06:43.130665  402335 ssh_runner.go:195] Run: which crictl
	I1014 20:06:43.135683  402335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:06:43.191647  402335 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:06:43.191752  402335 ssh_runner.go:195] Run: crio --version
	I1014 20:06:43.230688  402335 ssh_runner.go:195] Run: crio --version
	I1014 20:06:43.268887  402335 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1014 20:06:43.270197  402335 main.go:141] libmachine: (pause-488160) Calling .GetIP
	I1014 20:06:43.274222  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:43.274800  402335 main.go:141] libmachine: (pause-488160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:50:45", ip: ""} in network mk-pause-488160: {Iface:virbr2 ExpiryTime:2025-10-14 21:04:58 +0000 UTC Type:0 Mac:52:54:00:13:50:45 Iaid: IPaddr:192.168.50.36 Prefix:24 Hostname:pause-488160 Clientid:01:52:54:00:13:50:45}
	I1014 20:06:43.274836  402335 main.go:141] libmachine: (pause-488160) DBG | domain pause-488160 has defined IP address 192.168.50.36 and MAC address 52:54:00:13:50:45 in network mk-pause-488160
	I1014 20:06:43.275181  402335 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 20:06:43.281905  402335 kubeadm.go:883] updating cluster {Name:pause-488160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-488160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:06:43.282071  402335 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:06:43.282133  402335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:06:43.345623  402335 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:06:43.345654  402335 crio.go:433] Images already preloaded, skipping extraction
	I1014 20:06:43.345722  402335 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:06:43.400327  402335 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:06:43.400365  402335 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:06:43.400384  402335 kubeadm.go:934] updating node { 192.168.50.36 8443 v1.34.1 crio true true} ...
	I1014 20:06:43.400524  402335 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-488160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.36
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-488160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
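For reference, the [Unit]/[Service] fragment above is the systemd drop-in minikube renders for kubelet; the scp of 10-kubeadm.conf a few steps below is the matching transfer. A minimal, hedged way to inspect what the node's kubelet will actually run with (standard systemd tooling, not something this test executes):

	# Show the effective kubelet unit, including all drop-ins:
	systemctl cat kubelet
	# Or read minikube's override directly (path taken from the scp step below):
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf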
	I1014 20:06:43.400610  402335 ssh_runner.go:195] Run: crio config
	I1014 20:06:43.465004  402335 cni.go:84] Creating CNI manager for ""
	I1014 20:06:43.465035  402335 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:06:43.465059  402335 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:06:43.465090  402335 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.36 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-488160 NodeName:pause-488160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.36"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.36 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:06:43.465249  402335 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.36
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-488160"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.36"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.36"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
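The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are staged as /var/tmp/minikube/kubeadm.yaml.new by the scp step below. As a sketch, kubeadm v1.26+ can sanity-check such a file before it is applied; this run does not perform the check, so the invocation is illustrative only:

	# Validate the staged kubeadm config with the matching binary (paths from this log):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new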
	
	I1014 20:06:43.465352  402335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:06:43.479763  402335 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:06:43.479853  402335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:06:43.495767  402335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1014 20:06:43.524512  402335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:06:43.555791  402335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1014 20:06:43.580042  402335 ssh_runner.go:195] Run: grep 192.168.50.36	control-plane.minikube.internal$ /etc/hosts
	I1014 20:06:43.584664  402335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:06:43.895659  402335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:06:44.004354  402335 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160 for IP: 192.168.50.36
	I1014 20:06:44.004387  402335 certs.go:195] generating shared ca certs ...
	I1014 20:06:44.004408  402335 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:06:44.004629  402335 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:06:44.004717  402335 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:06:44.004737  402335 certs.go:257] generating profile certs ...
	I1014 20:06:44.004879  402335 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/client.key
	I1014 20:06:44.004987  402335 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/apiserver.key.4ffb12cb
	I1014 20:06:44.005054  402335 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/proxy-client.key
	I1014 20:06:44.005228  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:06:44.005286  402335 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:06:44.005300  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:06:44.005354  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:06:44.005393  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:06:44.005432  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:06:44.005498  402335 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:06:44.006458  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:06:44.120119  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:06:44.204360  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:06:44.257795  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:06:44.334024  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:06:44.415284  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:06:44.537148  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:06:44.641794  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/pause-488160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 20:06:44.734600  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:06:44.857806  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:06:44.995767  402335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:06:45.088906  402335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:06:45.139546  402335 ssh_runner.go:195] Run: openssl version
	I1014 20:06:45.155227  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:06:45.181826  402335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:06:45.192900  402335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:06:45.193005  402335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:06:45.205663  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 20:06:45.243943  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:06:45.275266  402335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:06:45.285822  402335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:06:45.285893  402335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:06:45.300356  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:06:45.322376  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:06:45.349878  402335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:06:45.358130  402335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:06:45.358224  402335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:06:45.369519  402335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
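The ls / openssl x509 -hash / ln -fs sequence above reproduces what OpenSSL's c_rehash does: libraries resolve CAs under /etc/ssl/certs through subject-hash symlinks such as b5213941.0. A minimal sketch of one iteration, assuming the same paths as this log:

	# Derive the subject hash OpenSSL uses for CA lookup, then point <hash>.0 at the cert:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"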
	I1014 20:06:45.387286  402335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:06:45.395464  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 20:06:45.406260  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 20:06:45.415462  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 20:06:45.431660  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 20:06:45.453833  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 20:06:45.472541  402335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
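Each openssl run above passes -checkend 86400, which exits 0 only if the certificate remains valid 86400 seconds (24 hours) from now; the series is therefore a cheap expiry probe across the control-plane certs. A standalone sketch of the same check:

	# Exit 0: still valid 24h from now; exit 1: will have expired within 24h.
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt \
	  && echo "cert ok for another day" || echo "cert expiring within 24h"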
	I1014 20:06:45.488676  402335 kubeadm.go:400] StartCluster: {Name:pause-488160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-488160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.36 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:06:45.488849  402335 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:06:45.488913  402335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:06:45.595843  402335 cri.go:89] found id: "b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f"
	I1014 20:06:45.595870  402335 cri.go:89] found id: "eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493"
	I1014 20:06:45.595875  402335 cri.go:89] found id: "97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1"
	I1014 20:06:45.595896  402335 cri.go:89] found id: "4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d"
	I1014 20:06:45.595901  402335 cri.go:89] found id: "cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df"
	I1014 20:06:45.595910  402335 cri.go:89] found id: "c7986305d3de16a9ce4cec6a583eab17b22910ed440b45fbb85c93b3fc93b8fe"
	I1014 20:06:45.595914  402335 cri.go:89] found id: "b8d5d635aebc8431ff09fbb97996c278d2a6b4986bca41eb3e2b0681bc56b587"
	I1014 20:06:45.595918  402335 cri.go:89] found id: "feaa3c08bc4bded6eaa7296f14e93eca6137a3b709ad5d2076b4ec6f03f6aeb5"
	I1014 20:06:45.595930  402335 cri.go:89] found id: "5caccb0282a4844cfa6b0109d55e4487e0c9a86e2864335d1dbed2a9a96eea34"
	I1014 20:06:45.595949  402335 cri.go:89] found id: "d84a43245bd6e2ce21fbb9335a0a0d9454392601bbec7de08c23200c3ebc8ea0"
	I1014 20:06:45.595954  402335 cri.go:89] found id: "16631c43f22f07c28b7a7f4f2287894b170562a4e3e17bbd806a7ce040e7df40"
	I1014 20:06:45.595958  402335 cri.go:89] found id: ""
	I1014 20:06:45.596019  402335 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-488160 -n pause-488160
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-488160 logs -n 25
I1014 20:07:20.550526  368634 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3562903331/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1014 20:07:20.571190  368634 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3562903331/001/docker-machine-driver-kvm2 version is 1.37.0
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-488160 logs -n 25: (1.504519132s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p scheduled-stop-464504                                                                                                                                           │ scheduled-stop-464504     │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:04 UTC │
	│ start   │ -p pause-488160 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-488160              │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │                     │
	│ start   │ -p offline-crio-270302 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-270302       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p NoKubernetes-280962 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p running-upgrade-370635 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-370635    │ jenkins │ v1.32.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ delete  │ -p offline-crio-270302                                                                                                                                             │ offline-crio-270302       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:06 UTC │
	│ delete  │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p running-upgrade-370635 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p pause-488160 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-488160              │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ stop    │ -p kubernetes-upgrade-425560                                                                                                                                       │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ ssh     │ -p NoKubernetes-280962 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ stop    │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-370635 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ delete  │ -p running-upgrade-370635                                                                                                                                          │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p force-systemd-env-702842 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-702842  │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ ssh     │ -p NoKubernetes-280962 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ delete  │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:07:14
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:07:14.503685  403592 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:07:14.504033  403592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:14.504075  403592 out.go:374] Setting ErrFile to fd 2...
	I1014 20:07:14.504160  403592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:14.504717  403592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:07:14.505826  403592 out.go:368] Setting JSON to false
	I1014 20:07:14.507104  403592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6577,"bootTime":1760465857,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:07:14.507204  403592 start.go:141] virtualization: kvm guest
	I1014 20:07:14.508704  403592 out.go:179] * [kubernetes-upgrade-425560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:07:14.510111  403592 notify.go:220] Checking for updates...
	I1014 20:07:14.510127  403592 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:07:14.511440  403592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:07:14.512665  403592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:07:14.513918  403592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:07:14.516456  403592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:07:14.517621  403592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:07:14.519281  403592 config.go:182] Loaded profile config "kubernetes-upgrade-425560": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:14.519873  403592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:07:14.519933  403592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:07:14.535685  403592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39081
	I1014 20:07:14.536234  403592 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:07:14.536851  403592 main.go:141] libmachine: Using API Version  1
	I1014 20:07:14.536879  403592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:07:14.537217  403592 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:07:14.537389  403592 main.go:141] libmachine: (kubernetes-upgrade-425560) Calling .DriverName
	I1014 20:07:14.537694  403592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:07:14.537996  403592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:07:14.538040  403592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:07:14.553302  403592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I1014 20:07:14.553858  403592 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:07:14.554409  403592 main.go:141] libmachine: Using API Version  1
	I1014 20:07:14.554431  403592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:07:14.554760  403592 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:07:14.554999  403592 main.go:141] libmachine: (kubernetes-upgrade-425560) Calling .DriverName
	I1014 20:07:14.597449  403592 out.go:179] * Using the kvm2 driver based on existing profile
	I1014 20:07:14.598556  403592 start.go:305] selected driver: kvm2
	I1014 20:07:14.598580  403592 start.go:925] validating driver "kvm2" against &{Name:kubernetes-upgrade-425560 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-425560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:14.598725  403592 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:07:14.599477  403592 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:07:14.599605  403592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:07:14.617491  403592 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:07:14.617538  403592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:07:14.634974  403592 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:07:14.635406  403592 cni.go:84] Creating CNI manager for ""
	I1014 20:07:14.635464  403592 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:07:14.635501  403592 start.go:349] cluster config:
	{Name:kubernetes-upgrade-425560 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-425560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:14.635611  403592 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:07:14.637656  403592 out.go:179] * Starting "kubernetes-upgrade-425560" primary control-plane node in "kubernetes-upgrade-425560" cluster
	I1014 20:07:15.761944  403305 start.go:364] duration metric: took 25.249687053s to acquireMachinesLock for "force-systemd-env-702842"
	I1014 20:07:15.762045  403305 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-702842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-702842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:07:15.762219  403305 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 20:07:14.410649  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Getting to WaitForSSH function...
	I1014 20:07:14.414927  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.415553  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.415573  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.415923  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Using SSH client type: external
	I1014 20:07:14.415942  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa (-rw-------)
	I1014 20:07:14.415969  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:07:14.415977  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | About to run SSH command:
	I1014 20:07:14.415987  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | exit 0
	I1014 20:07:14.562668  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | SSH cmd err, output: <nil>: 
	I1014 20:07:14.562925  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetConfigRaw
	I1014 20:07:14.563711  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:14.566773  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.567365  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.567413  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.567679  403014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/NoKubernetes-280962/config.json ...
	I1014 20:07:14.567932  403014 machine.go:93] provisionDockerMachine start ...
	I1014 20:07:14.567951  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:14.568231  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.571520  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.573819  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.573846  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.574152  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.574375  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.574562  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.574700  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.574897  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.575125  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.575130  403014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:07:14.698911  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 20:07:14.698937  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.699272  403014 buildroot.go:166] provisioning hostname "NoKubernetes-280962"
	I1014 20:07:14.699301  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.699584  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.703356  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.703917  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.703938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.704102  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.704361  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.704559  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.704712  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.704886  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.705090  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.705096  403014 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-280962 && echo "NoKubernetes-280962" | sudo tee /etc/hostname
	I1014 20:07:14.841172  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-280962
	
	I1014 20:07:14.841201  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.845015  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.845403  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.845431  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.845659  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.845896  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.846081  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.846228  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.846440  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.846654  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.846665  403014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-280962' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-280962/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-280962' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:07:14.975269  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:07:14.975294  403014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:07:14.975341  403014 buildroot.go:174] setting up certificates
	I1014 20:07:14.975377  403014 provision.go:84] configureAuth start
	I1014 20:07:14.975388  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.975723  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:14.978893  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.979354  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.979374  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.979601  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.982706  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.983078  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.983121  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.983244  403014 provision.go:143] copyHostCerts
	I1014 20:07:14.983304  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:07:14.983332  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:07:14.983432  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:07:14.983552  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:07:14.983557  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:07:14.983601  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:07:14.983689  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:07:14.983694  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:07:14.983731  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:07:14.983787  403014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-280962 san=[127.0.0.1 192.168.39.169 NoKubernetes-280962 localhost minikube]
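The san=[...] list above becomes the subjectAltName extension of the generated server.pem, which is why the machine can answer TLS as 127.0.0.1, its DHCP address, its hostname, localhost, and minikube. A hedged way to confirm the SANs landed in the certificate (not part of this run):

	# Print the SAN extension of the generated server certificate:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem \
	  | grep -A1 "Subject Alternative Name"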
	I1014 20:07:15.033267  403014 provision.go:177] copyRemoteCerts
	I1014 20:07:15.033336  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:07:15.033369  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.037006  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.037371  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.037387  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.037628  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.037821  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.037965  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.038217  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.132625  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:07:15.166911  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 20:07:15.199348  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
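The three scp lines above push the CA, server certificate, and server key into /etc/docker on the guest. A rough Go sketch of that copy loop, assuming the key path and docker@ login shown in the ssh client line above; note this is an approximation, since minikube's real flow goes through its own ssh_runner with sudo (a plain scp to /etc/docker would need remote root):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Key path and login taken from the sshutil line in the log above.
	key := "/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa"
	host := "docker@192.168.39.169"
	files := map[string]string{ // local -> remote; remote writes need root in reality
		"ca.pem":         "/etc/docker/ca.pem",
		"server.pem":     "/etc/docker/server.pem",
		"server-key.pem": "/etc/docker/server-key.pem",
	}
	for local, remote := range files {
		out, err := exec.Command("scp", "-i", key,
			"-o", "StrictHostKeyChecking=no", local, host+":"+remote).CombinedOutput()
		if err != nil {
			fmt.Printf("scp %s: %v\n%s", local, err, out)
		}
	}
}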
	I1014 20:07:15.232051  403014 provision.go:87] duration metric: took 256.658126ms to configureAuth
	I1014 20:07:15.232080  403014 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:07:15.232266  403014 config.go:182] Loaded profile config "NoKubernetes-280962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1014 20:07:15.232401  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.235938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.236370  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.236393  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.236679  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.236871  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.237028  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.237161  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.237446  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:15.237674  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:15.237686  403014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:07:15.495629  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:07:15.495644  403014 machine.go:96] duration metric: took 927.704754ms to provisionDockerMachine
	I1014 20:07:15.495655  403014 start.go:293] postStartSetup for "NoKubernetes-280962" (driver="kvm2")
	I1014 20:07:15.495663  403014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:07:15.495680  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.496063  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:07:15.496088  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.499485  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.500025  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.500057  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.500307  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.500570  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.500734  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.500869  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.591656  403014 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:07:15.596672  403014 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:07:15.596691  403014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:07:15.596761  403014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:07:15.596829  403014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:07:15.596908  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:07:15.608663  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:07:15.639335  403014 start.go:296] duration metric: took 143.645022ms for postStartSetup
	I1014 20:07:15.639406  403014 fix.go:56] duration metric: took 17.184487135s for fixHost
	I1014 20:07:15.639437  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.642562  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.643138  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.643167  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.643387  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.643622  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.643819  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.643993  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.644209  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:15.644470  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:15.644475  403014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:07:15.761836  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760472435.722128913
	
	I1014 20:07:15.761849  403014 fix.go:216] guest clock: 1760472435.722128913
	I1014 20:07:15.761856  403014 fix.go:229] Guest: 2025-10-14 20:07:15.722128913 +0000 UTC Remote: 2025-10-14 20:07:15.639412586 +0000 UTC m=+38.278899362 (delta=82.716327ms)
	I1014 20:07:15.761875  403014 fix.go:200] guest clock delta is within tolerance: 82.716327ms
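The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the skew if it falls within tolerance. A small Go sketch of that comparison, reusing the timestamp from the log; the 2s tolerance is an assumption, not necessarily minikube's threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output such as "1760472435.722128913"
// into a time.Time (assumes the usual 9-digit nanosecond fraction).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1760472435.722128913") // value from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
}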
	I1014 20:07:15.761879  403014 start.go:83] releasing machines lock for "NoKubernetes-280962", held for 17.307017522s
	I1014 20:07:15.761904  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.762236  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:15.765938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.766287  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.766324  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.766506  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767090  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767373  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767491  403014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:07:15.767547  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.767582  403014 ssh_runner.go:195] Run: cat /version.json
	I1014 20:07:15.767602  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.771010  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771244  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771492  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.771518  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771769  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.771770  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.771789  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.772013  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.772018  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.772210  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.772276  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.772442  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.772600  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.772860  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.888189  403014 ssh_runner.go:195] Run: systemctl --version
	I1014 20:07:15.895280  403014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:07:16.045127  403014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:07:16.053865  403014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:07:16.053933  403014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:07:16.075421  403014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
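The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, which is why the log then reports 87-podman-bridge.conflist as disabled. A Go sketch of the same rename pass (an approximation of the logged shell command, not minikube's cni.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		// Skip directories and configs that were already sidelined.
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println(err)
			} else {
				fmt.Println("disabled", src)
			}
		}
	}
}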
	I1014 20:07:16.075437  403014 start.go:495] detecting cgroup driver to use...
	I1014 20:07:16.075498  403014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:07:16.097187  403014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:07:16.118466  403014 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:07:16.118547  403014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:07:16.138902  403014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:07:16.158206  403014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:07:16.320619  403014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:07:16.550704  403014 docker.go:234] disabling docker service ...
	I1014 20:07:16.550792  403014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:07:16.568463  403014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:07:16.584268  403014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:07:16.743503  403014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:07:16.892091  403014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
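The docker.go steps above stop, disable, and mask the Docker units so they cannot be socket-activated back to life while CRI-O owns the node. A Go sketch of that sequence via systemctl (assumes sudo is available; unit names taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v\n%s", s, err, out)
		}
	}
}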
	I1014 20:07:16.909382  403014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:07:16.933200  403014 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	W1014 20:07:14.553534  402335 pod_ready.go:104] pod "coredns-66bc5c9577-mkw7n" is not "Ready", error: <nil>
	W1014 20:07:16.555102  402335 pod_ready.go:104] pod "coredns-66bc5c9577-mkw7n" is not "Ready", error: <nil>
	I1014 20:07:17.554512  402335 pod_ready.go:94] pod "coredns-66bc5c9577-mkw7n" is "Ready"
	I1014 20:07:17.554564  402335 pod_ready.go:86] duration metric: took 9.507602066s for pod "coredns-66bc5c9577-mkw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.558817  402335 pod_ready.go:83] waiting for pod "etcd-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.564691  402335 pod_ready.go:94] pod "etcd-pause-488160" is "Ready"
	I1014 20:07:17.564727  402335 pod_ready.go:86] duration metric: took 5.881414ms for pod "etcd-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.567540  402335 pod_ready.go:83] waiting for pod "kube-apiserver-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.574867  402335 pod_ready.go:94] pod "kube-apiserver-pause-488160" is "Ready"
	I1014 20:07:17.574899  402335 pod_ready.go:86] duration metric: took 7.331437ms for pod "kube-apiserver-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.577238  402335 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.459277  403014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1014 20:07:17.459370  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.480857  403014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:07:17.480930  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.499565  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.512844  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
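The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, drop any stale conmon_cgroup line, and re-add conmon_cgroup = "pod" after the manager setting. A Go sketch of equivalent rewrites (a regexp-based approximation of the logged sed commands, not minikube's crio.go):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Drop any existing conmon_cgroup line, mirroring the `sed -i '/.../d'` step.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
	// Pin the pause image and the cgroup manager; re-add conmon_cgroup after it.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		fmt.Println(err)
	}
}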
	I1014 20:07:17.528723  403014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:07:17.543529  403014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:07:17.556509  403014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:07:17.556569  403014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:07:17.581775  403014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
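As the crio.go warning above notes, the bridge-nf-call-iptables sysctl is absent until br_netfilter is loaded, so the flow probes the key, falls back to modprobe, and then enables IPv4 forwarding. A Go sketch of that fallback (needs root; paths taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Sysctl key missing, as in the log: try loading the module.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}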
	I1014 20:07:17.596737  403014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:07:17.767900  403014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:07:17.904079  403014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:07:17.904146  403014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:07:17.912046  403014 start.go:563] Will wait 60s for crictl version
	I1014 20:07:17.912111  403014 ssh_runner.go:195] Run: which crictl
	I1014 20:07:17.917719  403014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:07:17.968701  403014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
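After restarting crio, start.go polls up to 60s for /var/run/crio/crio.sock before asking crictl for its version, as logged a few lines up. A Go sketch of such a bounded socket wait (the 500ms poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path to appear, giving up after timeout.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}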
	I1014 20:07:17.968802  403014 ssh_runner.go:195] Run: crio --version
	I1014 20:07:18.012261  403014 ssh_runner.go:195] Run: crio --version
	I1014 20:07:18.055698  403014 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1014 20:07:18.057398  403014 ssh_runner.go:195] Run: rm -f paused
	I1014 20:07:18.064892  403014 out.go:179] * Done! minikube is ready without Kubernetes!
	I1014 20:07:18.068810  403014 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:07:17.752239  402335 pod_ready.go:94] pod "kube-controller-manager-pause-488160" is "Ready"
	I1014 20:07:17.752276  402335 pod_ready.go:86] duration metric: took 175.005622ms for pod "kube-controller-manager-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.952542  402335 pod_ready.go:83] waiting for pod "kube-proxy-7g2cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.351472  402335 pod_ready.go:94] pod "kube-proxy-7g2cw" is "Ready"
	I1014 20:07:18.351507  402335 pod_ready.go:86] duration metric: took 398.930477ms for pod "kube-proxy-7g2cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.552073  402335 pod_ready.go:83] waiting for pod "kube-scheduler-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.952897  402335 pod_ready.go:94] pod "kube-scheduler-pause-488160" is "Ready"
	I1014 20:07:18.952936  402335 pod_ready.go:86] duration metric: took 400.832438ms for pod "kube-scheduler-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.952951  402335 pod_ready.go:40] duration metric: took 10.912499429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:07:19.023371  402335 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:07:19.024914  402335 out.go:179] * Done! kubectl is now configured to use "pause-488160" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.885817429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=293d4266-e3c3-4c47-b824-cb44e21fc153 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.888201357Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=029e60f7-dcbc-4ce3-bc0d-5f10f66a6cc9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.888607065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472439888584755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=029e60f7-dcbc-4ce3-bc0d-5f10f66a6cc9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.889287817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e87735c-ec65-41a9-981d-8cd87e610ab5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.889430851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e87735c-ec65-41a9-981d-8cd87e610ab5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.890203120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e87735c-ec65-41a9-981d-8cd87e610ab5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.946108759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=694cd2ba-3321-4a01-b345-7363a6cb3481 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.946368222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=694cd2ba-3321-4a01-b345-7363a6cb3481 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.948999809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6db6065-62a0-4360-b8e3-6dcfebfc43d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.949490610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472439949460898,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6db6065-62a0-4360-b8e3-6dcfebfc43d4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.952939315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8430f847-de8c-408e-ab47-4787a82249ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.953428814Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8430f847-de8c-408e-ab47-4787a82249ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:19 pause-488160 crio[2570]: time="2025-10-14 20:07:19.953776531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8430f847-de8c-408e-ab47-4787a82249ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.005440902Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18bca77a-aca8-456c-9edd-d334c7ee50e2 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.005550034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18bca77a-aca8-456c-9edd-d334c7ee50e2 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.007634182Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57c6d84b-ef12-4faa-b7f1-bbfbb07396af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.008465566Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472440008376001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57c6d84b-ef12-4faa-b7f1-bbfbb07396af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.009490955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3503923-e87b-41f0-aa1e-5cc8ae91701c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.009604009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3503923-e87b-41f0-aa1e-5cc8ae91701c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.009928467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3503923-e87b-41f0-aa1e-5cc8ae91701c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.031762001Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed0f04b8-81ea-4d66-a600-d23f31c35b98 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.032041414Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-mkw7n,Uid:36f66181-b789-42ba-8a7f-4d680d697982,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472404016516202,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T20:05:26.918064613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-488160,Uid:2841f507ca0337d51963ec3de35897b9,Namespace:kub
e-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403806992157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2841f507ca0337d51963ec3de35897b9,kubernetes.io/config.seen: 2025-10-14T20:05:21.196780081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&PodSandboxMetadata{Name:etcd-pause-488160,Uid:83103fec4be4832c85d6356f6f0d2e52,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403804260842,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.36:2379,kubernetes.io/config.hash: 83103fec4be4832c85d6356f6f0d2e52,kubernetes.io/config.seen: 2025-10-14T20:05:21.196769354Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&PodSandboxMetadata{Name:kube-proxy-7g2cw,Uid:4d4af20d-b366-4ed8-a198-6aff03448749,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403800125950,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T20:05:26.830422059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:304406956b58f5530f3c92e24307a15a17
5764555318c78deb0855f6be512929,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-488160,Uid:562e0710eb923a2b69cc36a87e0635c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403792801514,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.36:8443,kubernetes.io/config.hash: 562e0710eb923a2b69cc36a87e0635c4,kubernetes.io/config.seen: 2025-10-14T20:05:21.196779019Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-488160,Uid:17bac2de099c8b85a00c8e835ae46407,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403731944950,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17bac2de099c8b85a00c8e835ae46407,kubernetes.io/config.seen: 2025-10-14T20:05:21.196780918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ed0f04b8-81ea-4d66-a600-d23f31c35b98 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.033464451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=256e6ca8-4be4-4e76-b527-7527ea673105 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.033609731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=256e6ca8-4be4-4e76-b527-7527ea673105 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:20 pause-488160 crio[2570]: time="2025-10-14 20:07:20.033856122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=256e6ca8-4be4-4e76-b527-7527ea673105 name=/runtime.v1.RuntimeService/ListContainers
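	
	The back-to-back ListContainers, ListPodSandbox, and ImageFsInfo debug entries above are CRI-O tracing incoming CRI polls, most likely from the kubelet's periodic container-state sync plus the log collection itself. A minimal sketch for reproducing the same container view on the node, assuming the pause-488160 VM is still up and crictl is on its PATH as in the minikube ISO:
	
	# Issues the same ListContainers RPC against CRI-O's default socket;
	# -a includes exited attempts, matching the table below.
	minikube -p pause-488160 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a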
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7ba9e8060d98       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   9e7190300541f       coredns-66bc5c9577-mkw7n
	4e90ac414d78a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   14 seconds ago      Running             kube-proxy                2                   378ca24845c9c       kube-proxy-7g2cw
	00096d625a013       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   19 seconds ago      Running             kube-scheduler            2                   9e778b6c8388e       kube-scheduler-pause-488160
	5ec14772e0933       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   19 seconds ago      Running             kube-controller-manager   2                   fe3903e8317b8       kube-controller-manager-pause-488160
	69d4f2f9551e1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   19 seconds ago      Running             kube-apiserver            2                   304406956b58f       kube-apiserver-pause-488160
	355aef60e9fa3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   23 seconds ago      Running             etcd                      2                   0f73f48ed9171       etcd-pause-488160
	a7673602dd13a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   34 seconds ago      Exited              coredns                   1                   9e7190300541f       coredns-66bc5c9577-mkw7n
	b1c20d90de825       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   35 seconds ago      Exited              kube-proxy                1                   378ca24845c9c       kube-proxy-7g2cw
	eabddd5066982       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Exited              etcd                      1                   0f73f48ed9171       etcd-pause-488160
	97414ca2b92dc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Exited              kube-apiserver            1                   304406956b58f       kube-apiserver-pause-488160
	4a64743e6066f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Exited              kube-controller-manager   1                   fe3903e8317b8       kube-controller-manager-pause-488160
	cbfcc79a2721f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Exited              kube-scheduler            1                   9e778b6c8388e       kube-scheduler-pause-488160
	
	
	==> coredns [a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871] <==
	
	
	==> coredns [f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47345 - 2004 "HINFO IN 1534063794210058555.795810395313342923. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027727208s
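	
	The single HINFO query with a random label is CoreDNS's loop-plugin self-probe; an NXDOMAIN answer is the expected healthy result, so this instance came up cleanly. A sketch for pulling the same logs via kubectl, assuming the kubeconfig context carries the profile name as minikube configures by default:
	
	# k8s-app=kube-dns is the selector visible on the coredns sandbox above.
	kubectl --context pause-488160 -n kube-system logs -l k8s-app=kube-dns --tail=50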
	
	
	==> describe nodes <==
	Name:               pause-488160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-488160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=pause-488160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_05_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-488160
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:07:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    pause-488160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c7caad6c2ea4bdba06f0d07a6cc85da
	  System UUID:                3c7caad6-c2ea-4bdb-a06f-0d07a6cc85da
	  Boot ID:                    6cf9d609-b86f-4f06-85a5-86f036ece3e6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mkw7n                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     114s
	  kube-system                 etcd-pause-488160                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         119s
	  kube-system                 kube-apiserver-pause-488160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-pause-488160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-7g2cw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-pause-488160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  Starting                 13s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m7s)  kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m7s)  kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m7s)  kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeReady                118s                 kubelet          Node pause-488160 status is now: NodeReady
	  Normal  RegisteredNode           115s                 node-controller  Node pause-488160 event: Registered Node pause-488160 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                  node-controller  Node pause-488160 event: Registered Node pause-488160 in Controller
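	
	The node report above is standard kubectl describe node output; the three clusters of Starting/NodeHasSufficient* events (roughly 2m7s, 119s, and 21s ago) show the kubelet coming up three times in this run, consistent with the pause/unpause cycle the test exercises. A sketch to regenerate it, under the same context-name assumption:
	
	kubectl --context pause-488160 describe node pause-488160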
	
	
	==> dmesg <==
	[Oct14 20:04] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000062] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002382] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.207091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct14 20:05] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.123631] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.637671] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.148082] kauditd_printk_skb: 143 callbacks suppressed
	[  +1.259113] kauditd_printk_skb: 18 callbacks suppressed
	[Oct14 20:06] kauditd_printk_skb: 190 callbacks suppressed
	[  +2.744135] kauditd_printk_skb: 319 callbacks suppressed
	[Oct14 20:07] kauditd_printk_skb: 81 callbacks suppressed
	[  +9.551782] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b] <==
	{"level":"warn","ts":"2025-10-14T20:07:06.750858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"416.36827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.750946Z","caller":"traceutil/trace.go:172","msg":"trace[1945244499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:453; }","duration":"416.479108ms","start":"2025-10-14T20:07:06.334450Z","end":"2025-10-14T20:07:06.750929Z","steps":["trace[1945244499] 'agreement among raft nodes before linearized reading'  (duration: 201.635819ms)","trace[1945244499] 'range keys from in-memory index tree'  (duration: 214.711277ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.750984Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.334430Z","time spent":"416.544657ms","remote":"127.0.0.1:53556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:07:06.753001Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.190356ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11334885043123316792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.36\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.50.36\" value_size:66 lease:2111513006268540982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.36\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-14T20:07:06.753390Z","caller":"traceutil/trace.go:172","msg":"trace[1060290842] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"217.236086ms","start":"2025-10-14T20:07:06.536061Z","end":"2025-10-14T20:07:06.753297Z","steps":["trace[1060290842] 'read index received'  (duration: 14.611µs)","trace[1060290842] 'applied index is now lower than readState.Index'  (duration: 217.220609ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.753905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.187722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.754012Z","caller":"traceutil/trace.go:172","msg":"trace[1674341272] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:454; }","duration":"267.392682ms","start":"2025-10-14T20:07:06.486557Z","end":"2025-10-14T20:07:06.753949Z","steps":["trace[1674341272] 'agreement among raft nodes before linearized reading'  (duration: 267.168491ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:06.755683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.574982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-14T20:07:06.755968Z","caller":"traceutil/trace.go:172","msg":"trace[457255671] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:1; response_revision:454; }","duration":"270.857689ms","start":"2025-10-14T20:07:06.485098Z","end":"2025-10-14T20:07:06.755956Z","steps":["trace[457255671] 'agreement among raft nodes before linearized reading'  (duration: 270.343497ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:06.757221Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.249407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.764645Z","caller":"traceutil/trace.go:172","msg":"trace[454865915] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:454; }","duration":"277.390789ms","start":"2025-10-14T20:07:06.487237Z","end":"2025-10-14T20:07:06.764628Z","steps":["trace[454865915] 'agreement among raft nodes before linearized reading'  (duration: 266.216103ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:07:06.764523Z","caller":"traceutil/trace.go:172","msg":"trace[579126946] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"776.306979ms","start":"2025-10-14T20:07:05.987952Z","end":"2025-10-14T20:07:06.764259Z","steps":["trace[579126946] 'process raft request'  (duration: 548.160958ms)","trace[579126946] 'compare'  (duration: 214.717429ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.770670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:05.987929Z","time spent":"782.679207ms","remote":"127.0.0.1:53276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.36\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.50.36\" value_size:66 lease:2111513006268540982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.36\" > >"}
	{"level":"warn","ts":"2025-10-14T20:07:07.304669Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11334885043123316802,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-10-14T20:07:07.315323Z","caller":"traceutil/trace.go:172","msg":"trace[939364981] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:488; }","duration":"511.340857ms","start":"2025-10-14T20:07:06.803880Z","end":"2025-10-14T20:07:07.315221Z","steps":["trace[939364981] 'read index received'  (duration: 511.33371ms)","trace[939364981] 'applied index is now lower than readState.Index'  (duration: 6.162µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:07.318806Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"514.907222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-10-14T20:07:07.318860Z","caller":"traceutil/trace.go:172","msg":"trace[613297594] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:454; }","duration":"514.973846ms","start":"2025-10-14T20:07:06.803875Z","end":"2025-10-14T20:07:07.318849Z","steps":["trace[613297594] 'agreement among raft nodes before linearized reading'  (duration: 511.484249ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.318894Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.803855Z","time spent":"515.027557ms","remote":"127.0.0.1:53588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":228,"request content":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:07:07.319123Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.834755ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:07.319207Z","caller":"traceutil/trace.go:172","msg":"trace[437794405] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:454; }","duration":"185.918638ms","start":"2025-10-14T20:07:07.133278Z","end":"2025-10-14T20:07:07.319197Z","steps":["trace[437794405] 'range keys from in-memory index tree'  (duration: 185.766869ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:07:07.319714Z","caller":"traceutil/trace.go:172","msg":"trace[1509350123] transaction","detail":"{read_only:false; number_of_response:0; response_revision:455; }","duration":"445.745167ms","start":"2025-10-14T20:07:06.873961Z","end":"2025-10-14T20:07:07.319706Z","steps":["trace[1509350123] 'process raft request'  (duration: 445.7164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.319775Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.873890Z","time spent":"445.850169ms","remote":"127.0.0.1:53956","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/kubeadm:cluster-admins\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubeadm:cluster-admins\" value_size:375 >> failure:<>"}
	{"level":"info","ts":"2025-10-14T20:07:07.321003Z","caller":"traceutil/trace.go:172","msg":"trace[889367425] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"521.330571ms","start":"2025-10-14T20:07:06.799660Z","end":"2025-10-14T20:07:07.320991Z","steps":["trace[889367425] 'process raft request'  (duration: 515.592218ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.321349Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.799591Z","time spent":"521.662073ms","remote":"127.0.0.1:53556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6057,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-488160\" mod_revision:449 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-488160\" value_size:6005 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-488160\" > >"}
	{"level":"warn","ts":"2025-10-14T20:07:07.321481Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.840455Z","time spent":"481.024294ms","remote":"127.0.0.1:54308","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493] <==
	{"level":"info","ts":"2025-10-14T20:06:45.947006Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:06:45.953979Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T20:06:45.954264Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-10-14T20:06:45.954938Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-14T20:06:45.974444Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:06:45.999281Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T20:06:46.019385Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.36:2379"}
	{"level":"info","ts":"2025-10-14T20:06:46.312266Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-14T20:06:46.312355Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-488160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"]}
	{"level":"error","ts":"2025-10-14T20:06:46.312445Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T20:06:46.312529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-14T20:06:46.315332Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:06:46.315388Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:06:46.315410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-14T20:06:46.315461Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-14T20:06:46.318258Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:06:46.318371Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.36:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:06:46.318397Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.36:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:06:46.318257Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5487579cc149d4d","current-leader-member-id":"e5487579cc149d4d"}
	{"level":"info","ts":"2025-10-14T20:06:46.318484Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-14T20:06:46.318493Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-14T20:06:46.331584Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"error","ts":"2025-10-14T20:06:46.331674Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.36:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:06:46.331740Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2025-10-14T20:06:46.331773Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-488160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"]}
	
	
	==> kernel <==
	 20:07:20 up 2 min,  0 users,  load average: 1.30, 0.55, 0.21
	Linux pause-488160 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c] <==
	I1014 20:07:04.545722       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 20:07:04.545918       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:07:04.547729       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1014 20:07:04.547782       1 aggregator.go:171] initial CRD sync complete...
	I1014 20:07:04.547794       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 20:07:04.547802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 20:07:04.547806       1 cache.go:39] Caches are synced for autoregister controller
	I1014 20:07:04.548415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 20:07:04.548508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:07:04.556631       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:07:04.556745       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1014 20:07:04.556952       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 20:07:04.577307       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:07:04.577348       1 policy_source.go:240] refreshing policies
	I1014 20:07:04.588808       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:07:04.593406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 20:07:05.088682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:07:05.349215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 20:07:07.541676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:07:07.616755       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 20:07:07.649237       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 20:07:07.656817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 20:07:08.850044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:07:08.902454       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:07:08.950609       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1] <==
	W1014 20:06:46.521412       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1014 20:06:46.521428       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1014 20:06:46.570252       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1014 20:06:46.570433       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1014 20:06:46.572194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1014 20:06:46.573309       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:46.573496       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1014 20:06:46.579676       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:06:46.597566       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1014 20:06:46.597622       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 20:06:46.598037       1 instance.go:239] Using reconciler: lease
	W1014 20:06:46.608215       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1014 20:06:46.608937       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.574813       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.574932       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.610215       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.056612       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.149015       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.268528       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.437492       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.663597       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.785024       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:55.376479       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:55.619520       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:56.325534       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d] <==
	I1014 20:06:46.642116       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:06:47.382064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1014 20:06:47.382212       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:06:47.383897       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 20:06:47.384073       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 20:06:47.384623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1014 20:06:47.384743       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f] <==
	I1014 20:07:08.584358       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:07:08.584634       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:07:08.588469       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:07:08.592731       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:07:08.595251       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 20:07:08.597382       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 20:07:08.597736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 20:07:08.597828       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 20:07:08.597879       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:07:08.597947       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 20:07:08.597996       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 20:07:08.598049       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 20:07:08.599285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 20:07:08.599441       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 20:07:08.601427       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:07:08.606396       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 20:07:08.606882       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 20:07:08.608600       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 20:07:08.612640       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 20:07:08.612819       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 20:07:08.618606       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 20:07:08.618830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:07:08.664434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:07:08.664552       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:07:08.664576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4] <==
	I1014 20:07:06.635376       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:07:06.736001       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:07:06.736045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.36"]
	E1014 20:07:06.736244       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:07:06.793950       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1014 20:07:06.794392       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 20:07:06.794737       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:07:06.817942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:07:06.818847       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:07:06.818959       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:06.825892       1 config.go:200] "Starting service config controller"
	I1014 20:07:06.825925       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:07:06.825947       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:07:06.825952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:07:06.825966       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:07:06.825971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:07:06.826648       1 config.go:309] "Starting node config controller"
	I1014 20:07:06.826681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:07:06.826690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:07:06.926670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:07:06.926704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:07:06.926722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f] <==
	I1014 20:06:46.050626       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-scheduler [00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f] <==
	I1014 20:07:02.983636       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:07:04.610584       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:07:04.610627       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:04.617510       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:07:04.617590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.618464       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.617572       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:07:04.618583       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:07:04.617635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:04.620402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:04.617647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:07:04.718726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.718726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 20:07:04.721290       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df] <==
	I1014 20:06:46.814744       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:06:57.525605       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.36:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.36:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.36:41816->192.168.50.36:8443: read: connection reset by peer
	W1014 20:06:57.525668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:06:57.525677       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:06:57.536980       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:06:57.537018       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1014 20:06:57.537033       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1014 20:06:57.538985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1014 20:06:57.539184       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1014 20:06:57.539291       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539302       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:06:57.539324       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 20:06:57.539394       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1014 20:06:57.539431       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1014 20:06:57.539437       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1014 20:06:57.539452       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 14 20:07:03 pause-488160 kubelet[3656]: E1014 20:07:03.250452    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.247966    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.333628    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.474235    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.605499    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-488160\" already exists" pod="kube-system/kube-apiserver-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.605702    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.617721    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-488160\" already exists" pod="kube-system/kube-controller-manager-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.617751    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.644476    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-488160\" already exists" pod="kube-system/kube-scheduler-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.644623    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.656940    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-488160\" already exists" pod="kube-system/etcd-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685417    3656 kubelet_node_status.go:124] "Node was previously registered" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685670    3656 kubelet_node_status.go:78] "Successfully registered node" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685708    3656 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.687747    3656 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.941877    3656 apiserver.go:52] "Watching apiserver"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.992011    3656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.085832    3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d4af20d-b366-4ed8-a198-6aff03448749-xtables-lock\") pod \"kube-proxy-7g2cw\" (UID: \"4d4af20d-b366-4ed8-a198-6aff03448749\") " pod="kube-system/kube-proxy-7g2cw"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.087315    3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d4af20d-b366-4ed8-a198-6aff03448749-lib-modules\") pod \"kube-proxy-7g2cw\" (UID: \"4d4af20d-b366-4ed8-a198-6aff03448749\") " pod="kube-system/kube-proxy-7g2cw"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.247829    3656 scope.go:117] "RemoveContainer" containerID="a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.249310    3656 scope.go:117] "RemoveContainer" containerID="b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f"
	Oct 14 20:07:10 pause-488160 kubelet[3656]: E1014 20:07:10.182481    3656 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760472430181951440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:10 pause-488160 kubelet[3656]: E1014 20:07:10.182512    3656 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760472430181951440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:20 pause-488160 kubelet[3656]: E1014 20:07:20.184338    3656 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760472440183672617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:20 pause-488160 kubelet[3656]: E1014 20:07:20.184390    3656 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760472440183672617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-488160 -n pause-488160
helpers_test.go:269: (dbg) Run:  kubectl --context pause-488160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
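For reference, the harness's pod-health check above (helpers_test.go:269) reduces to one kubectl query for non-Running pods. A minimal, hypothetical Go sketch of that same check follows; it is not part of the minikube test suite, and it assumes kubectl is on PATH and that a "pause-488160" context exists in your kubeconfig:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listNotRunning shells out to kubectl the same way the post-mortem
	// helper does: it returns the names of all pods, across every
	// namespace, whose phase is not Running. The context name is a
	// placeholder for whichever profile failed.
	func listNotRunning(context string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		return string(out), err
	}

	func main() {
		pods, err := listNotRunning("pause-488160")
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("non-Running pods:", pods)
	}

An empty result simply means every pod reported phase Running at the time of the query.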
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-488160 -n pause-488160
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-488160 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-488160 logs -n 25: (1.482450182s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p scheduled-stop-464504                                                                                                                                           │ scheduled-stop-464504     │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:04 UTC │
	│ start   │ -p pause-488160 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-488160              │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │                     │
	│ start   │ -p offline-crio-270302 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-270302       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p NoKubernetes-280962 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p running-upgrade-370635 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ running-upgrade-370635    │ jenkins │ v1.32.0 │ 14 Oct 25 20:04 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ delete  │ -p offline-crio-270302                                                                                                                                             │ offline-crio-270302       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:06 UTC │
	│ delete  │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:05 UTC │
	│ start   │ -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                    │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:05 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p running-upgrade-370635 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                 │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p pause-488160 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-488160              │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ stop    │ -p kubernetes-upgrade-425560                                                                                                                                       │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ ssh     │ -p NoKubernetes-280962 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ stop    │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p NoKubernetes-280962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                         │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:07 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-370635 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker        │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ delete  │ -p running-upgrade-370635                                                                                                                                          │ running-upgrade-370635    │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │ 14 Oct 25 20:06 UTC │
	│ start   │ -p force-systemd-env-702842 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-702842  │ jenkins │ v1.37.0 │ 14 Oct 25 20:06 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ start   │ -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-425560 │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ ssh     │ -p NoKubernetes-280962 sudo systemctl is-active --quiet service kubelet                                                                                            │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │                     │
	│ delete  │ -p NoKubernetes-280962                                                                                                                                             │ NoKubernetes-280962       │ jenkins │ v1.37.0 │ 14 Oct 25 20:07 UTC │ 14 Oct 25 20:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:07:14
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:07:14.503685  403592 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:07:14.504033  403592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:14.504075  403592 out.go:374] Setting ErrFile to fd 2...
	I1014 20:07:14.504160  403592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:14.504717  403592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:07:14.505826  403592 out.go:368] Setting JSON to false
	I1014 20:07:14.507104  403592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6577,"bootTime":1760465857,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:07:14.507204  403592 start.go:141] virtualization: kvm guest
	I1014 20:07:14.508704  403592 out.go:179] * [kubernetes-upgrade-425560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:07:14.510111  403592 notify.go:220] Checking for updates...
	I1014 20:07:14.510127  403592 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:07:14.511440  403592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:07:14.512665  403592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:07:14.513918  403592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:07:14.516456  403592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:07:14.517621  403592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:07:14.519281  403592 config.go:182] Loaded profile config "kubernetes-upgrade-425560": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:14.519873  403592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:07:14.519933  403592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:07:14.535685  403592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39081
	I1014 20:07:14.536234  403592 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:07:14.536851  403592 main.go:141] libmachine: Using API Version  1
	I1014 20:07:14.536879  403592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:07:14.537217  403592 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:07:14.537389  403592 main.go:141] libmachine: (kubernetes-upgrade-425560) Calling .DriverName
	I1014 20:07:14.537694  403592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:07:14.537996  403592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:07:14.538040  403592 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:07:14.553302  403592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38665
	I1014 20:07:14.553858  403592 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:07:14.554409  403592 main.go:141] libmachine: Using API Version  1
	I1014 20:07:14.554431  403592 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:07:14.554760  403592 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:07:14.554999  403592 main.go:141] libmachine: (kubernetes-upgrade-425560) Calling .DriverName
	I1014 20:07:14.597449  403592 out.go:179] * Using the kvm2 driver based on existing profile
	I1014 20:07:14.598556  403592 start.go:305] selected driver: kvm2
	I1014 20:07:14.598580  403592 start.go:925] validating driver "kvm2" against &{Name:kubernetes-upgrade-425560 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-425560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:14.598725  403592 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:07:14.599477  403592 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:07:14.599605  403592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:07:14.617491  403592 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:07:14.617538  403592 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:07:14.634974  403592 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:07:14.635406  403592 cni.go:84] Creating CNI manager for ""
	I1014 20:07:14.635464  403592 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 20:07:14.635501  403592 start.go:349] cluster config:
	{Name:kubernetes-upgrade-425560 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-425560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.247 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:07:14.635611  403592 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:07:14.637656  403592 out.go:179] * Starting "kubernetes-upgrade-425560" primary control-plane node in "kubernetes-upgrade-425560" cluster
	I1014 20:07:15.761944  403305 start.go:364] duration metric: took 25.249687053s to acquireMachinesLock for "force-systemd-env-702842"
	I1014 20:07:15.762045  403305 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-702842 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-702842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:07:15.762219  403305 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 20:07:14.410649  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Getting to WaitForSSH function...
	I1014 20:07:14.414927  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.415553  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.415573  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.415923  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Using SSH client type: external
	I1014 20:07:14.415942  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa (-rw-------)
	I1014 20:07:14.415969  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:07:14.415977  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | About to run SSH command:
	I1014 20:07:14.415987  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | exit 0
	I1014 20:07:14.562668  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | SSH cmd err, output: <nil>: 
	I1014 20:07:14.562925  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetConfigRaw
	I1014 20:07:14.563711  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:14.566773  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.567365  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.567413  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.567679  403014 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/NoKubernetes-280962/config.json ...
	I1014 20:07:14.567932  403014 machine.go:93] provisionDockerMachine start ...
	I1014 20:07:14.567951  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:14.568231  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.571520  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.573819  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.573846  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.574152  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.574375  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.574562  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.574700  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.574897  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.575125  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.575130  403014 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 20:07:14.698911  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 20:07:14.698937  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.699272  403014 buildroot.go:166] provisioning hostname "NoKubernetes-280962"
	I1014 20:07:14.699301  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.699584  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.703356  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.703917  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.703938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.704102  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.704361  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.704559  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.704712  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.704886  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.705090  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.705096  403014 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-280962 && echo "NoKubernetes-280962" | sudo tee /etc/hostname
	I1014 20:07:14.841172  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-280962
	
	I1014 20:07:14.841201  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.845015  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.845403  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.845431  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.845659  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:14.845896  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.846081  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:14.846228  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:14.846440  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:14.846654  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:14.846665  403014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-280962' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-280962/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-280962' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:07:14.975269  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
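The /etc/hosts patch that just ran is worth restating on its own: it first checks whether the hostname is already mapped, rewrites an existing 127.0.1.1 entry in place if there is one, and only appends otherwise. A minimal standalone sketch (NEW_HOSTNAME stands in for whatever machine name is being provisioned):

	#!/bin/bash
	# idempotent /etc/hosts update, mirroring the provisioning step above
	NEW_HOSTNAME="NoKubernetes-280962"   # placeholder machine name
	if ! grep -q "\s${NEW_HOSTNAME}$" /etc/hosts; then
	    if grep -q '^127.0.1.1\s' /etc/hosts; then
	        # a 127.0.1.1 entry already exists: rewrite it in place
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
	    else
	        # no 127.0.1.1 entry yet: append one
	        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
	    fi
	fi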
	I1014 20:07:14.975294  403014 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:07:14.975341  403014 buildroot.go:174] setting up certificates
	I1014 20:07:14.975377  403014 provision.go:84] configureAuth start
	I1014 20:07:14.975388  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetMachineName
	I1014 20:07:14.975723  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:14.978893  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.979354  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.979374  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.979601  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:14.982706  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.983078  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:14.983121  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:14.983244  403014 provision.go:143] copyHostCerts
	I1014 20:07:14.983304  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:07:14.983332  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:07:14.983432  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:07:14.983552  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:07:14.983557  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:07:14.983601  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:07:14.983689  403014 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:07:14.983694  403014 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:07:14.983731  403014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:07:14.983787  403014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-280962 san=[127.0.0.1 192.168.39.169 NoKubernetes-280962 localhost minikube]
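The "generating server cert" line above issues a server certificate signed by minikube's local CA, with the SAN list [127.0.0.1 192.168.39.169 NoKubernetes-280962 localhost minikube]. A hypothetical openssl equivalent of that step (the file names here are assumptions for illustration, not minikube's actual code path):

	# issue a CA-signed server cert carrying the same SAN list as the log line
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.NoKubernetes-280962"
	openssl x509 -req -in server.csr -days 365 \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.169,DNS:NoKubernetes-280962,DNS:localhost,DNS:minikube")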
	I1014 20:07:15.033267  403014 provision.go:177] copyRemoteCerts
	I1014 20:07:15.033336  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:07:15.033369  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.037006  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.037371  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.037387  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.037628  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.037821  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.037965  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.038217  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.132625  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:07:15.166911  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 20:07:15.199348  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:07:15.232051  403014 provision.go:87] duration metric: took 256.658126ms to configureAuth
	I1014 20:07:15.232080  403014 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:07:15.232266  403014 config.go:182] Loaded profile config "NoKubernetes-280962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1014 20:07:15.232401  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.235938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.236370  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.236393  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.236679  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.236871  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.237028  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.237161  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.237446  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:15.237674  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:15.237686  403014 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:07:15.495629  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:07:15.495644  403014 machine.go:96] duration metric: took 927.704754ms to provisionDockerMachine
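The container-runtime step that just finished amounts to writing one sysconfig drop-in and bouncing CRI-O. Restated as a standalone sketch:

	# write CRI-O's minikube options drop-in and restart the service
	sudo mkdir -p /etc/sysconfig
	printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio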
	I1014 20:07:15.495655  403014 start.go:293] postStartSetup for "NoKubernetes-280962" (driver="kvm2")
	I1014 20:07:15.495663  403014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:07:15.495680  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.496063  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:07:15.496088  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.499485  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.500025  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.500057  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.500307  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.500570  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.500734  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.500869  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.591656  403014 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:07:15.596672  403014 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:07:15.596691  403014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:07:15.596761  403014 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:07:15.596829  403014 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:07:15.596908  403014 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:07:15.608663  403014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:07:15.639335  403014 start.go:296] duration metric: took 143.645022ms for postStartSetup
	I1014 20:07:15.639406  403014 fix.go:56] duration metric: took 17.184487135s for fixHost
	I1014 20:07:15.639437  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.642562  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.643138  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.643167  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.643387  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.643622  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.643819  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.643993  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.644209  403014 main.go:141] libmachine: Using SSH client type: native
	I1014 20:07:15.644470  403014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1014 20:07:15.644475  403014 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:07:15.761836  403014 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760472435.722128913
	
	I1014 20:07:15.761849  403014 fix.go:216] guest clock: 1760472435.722128913
	I1014 20:07:15.761856  403014 fix.go:229] Guest: 2025-10-14 20:07:15.722128913 +0000 UTC Remote: 2025-10-14 20:07:15.639412586 +0000 UTC m=+38.278899362 (delta=82.716327ms)
	I1014 20:07:15.761875  403014 fix.go:200] guest clock delta is within tolerance: 82.716327ms
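The clock check above runs "date +%s.%N" in the guest and compares the result against a host-side timestamp; here the delta came out to 82.7ms, inside tolerance. A hedged sketch of the same comparison (SSH target and user taken from the log; the plain echo of the delta is illustrative, minikube applies its own tolerance logic):

	# compare guest and host clocks; a large delta would warrant a time resync
	guest=$(ssh docker@192.168.39.169 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest clock delta: $(echo "$host - $guest" | bc)s"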
	I1014 20:07:15.761879  403014 start.go:83] releasing machines lock for "NoKubernetes-280962", held for 17.307017522s
	I1014 20:07:15.761904  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.762236  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetIP
	I1014 20:07:15.765938  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.766287  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.766324  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.766506  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767090  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767373  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .DriverName
	I1014 20:07:15.767491  403014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:07:15.767547  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.767582  403014 ssh_runner.go:195] Run: cat /version.json
	I1014 20:07:15.767602  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHHostname
	I1014 20:07:15.771010  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771244  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771492  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.771518  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.771769  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.771770  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bc:57:d8", ip: ""} in network mk-NoKubernetes-280962: {Iface:virbr3 ExpiryTime:2025-10-14 21:07:11 +0000 UTC Type:0 Mac:52:54:00:bc:57:d8 Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:nokubernetes-280962 Clientid:01:52:54:00:bc:57:d8}
	I1014 20:07:15.771789  403014 main.go:141] libmachine: (NoKubernetes-280962) DBG | domain NoKubernetes-280962 has defined IP address 192.168.39.169 and MAC address 52:54:00:bc:57:d8 in network mk-NoKubernetes-280962
	I1014 20:07:15.772013  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHPort
	I1014 20:07:15.772018  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.772210  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.772276  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHKeyPath
	I1014 20:07:15.772442  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.772600  403014 main.go:141] libmachine: (NoKubernetes-280962) Calling .GetSSHUsername
	I1014 20:07:15.772860  403014 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/NoKubernetes-280962/id_rsa Username:docker}
	I1014 20:07:15.888189  403014 ssh_runner.go:195] Run: systemctl --version
	I1014 20:07:15.895280  403014 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:07:16.045127  403014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:07:16.053865  403014 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:07:16.053933  403014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:07:16.075421  403014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
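Note that the CNI cleanup above does not delete the conflicting bridge/podman configs; it renames them with a .mk_disabled suffix so they stop matching the runtime's config glob but remain recoverable. The logged find invocation, requoted as a runnable sketch:

	# disable bridge/podman CNI configs by renaming, keeping them recoverable
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;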
	I1014 20:07:16.075437  403014 start.go:495] detecting cgroup driver to use...
	I1014 20:07:16.075498  403014 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:07:16.097187  403014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:07:16.118466  403014 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:07:16.118547  403014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:07:16.138902  403014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:07:16.158206  403014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:07:16.320619  403014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:07:16.550704  403014 docker.go:234] disabling docker service ...
	I1014 20:07:16.550792  403014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:07:16.568463  403014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:07:16.584268  403014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:07:16.743503  403014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:07:16.892091  403014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:07:16.909382  403014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:07:16.933200  403014 download.go:108] Downloading: https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v0.0.0/bin/linux/amd64/kubeadm.sha1 -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/linux/amd64/v0.0.0/kubeadm
	W1014 20:07:14.553534  402335 pod_ready.go:104] pod "coredns-66bc5c9577-mkw7n" is not "Ready", error: <nil>
	W1014 20:07:16.555102  402335 pod_ready.go:104] pod "coredns-66bc5c9577-mkw7n" is not "Ready", error: <nil>
	I1014 20:07:17.554512  402335 pod_ready.go:94] pod "coredns-66bc5c9577-mkw7n" is "Ready"
	I1014 20:07:17.554564  402335 pod_ready.go:86] duration metric: took 9.507602066s for pod "coredns-66bc5c9577-mkw7n" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.558817  402335 pod_ready.go:83] waiting for pod "etcd-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.564691  402335 pod_ready.go:94] pod "etcd-pause-488160" is "Ready"
	I1014 20:07:17.564727  402335 pod_ready.go:86] duration metric: took 5.881414ms for pod "etcd-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.567540  402335 pod_ready.go:83] waiting for pod "kube-apiserver-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.574867  402335 pod_ready.go:94] pod "kube-apiserver-pause-488160" is "Ready"
	I1014 20:07:17.574899  402335 pod_ready.go:86] duration metric: took 7.331437ms for pod "kube-apiserver-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.577238  402335 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.459277  403014 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1014 20:07:17.459370  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.480857  403014 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:07:17.480930  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.499565  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:07:17.512844  403014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
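The four sed edits above rewrite CRI-O's drop-in config in place: pause image, cgroup driver, and a conmon_cgroup line re-inserted directly after the cgroup_manager key. Collected into one sketch:

	# rewrite /etc/crio/crio.conf.d/02-crio.conf the way the log does
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any old value
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # add it back after the key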
	I1014 20:07:17.528723  403014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:07:17.543529  403014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:07:17.556509  403014 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:07:17.556569  403014 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:07:17.581775  403014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
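The netfilter failure above is the case the log itself flags as "might be okay": the bridge sysctl only exists once br_netfilter is loaded, so the fallback is a modprobe followed by enabling IPv4 forwarding. As a sketch:

	# load br_netfilter if the bridge sysctl is missing, then enable forwarding
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	fi
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'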
	I1014 20:07:17.596737  403014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:07:17.767900  403014 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:07:17.904079  403014 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:07:17.904146  403014 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:07:17.912046  403014 start.go:563] Will wait 60s for crictl version
	I1014 20:07:17.912111  403014 ssh_runner.go:195] Run: which crictl
	I1014 20:07:17.917719  403014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:07:17.968701  403014 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:07:17.968802  403014 ssh_runner.go:195] Run: crio --version
	I1014 20:07:18.012261  403014 ssh_runner.go:195] Run: crio --version
	I1014 20:07:18.055698  403014 out.go:179] * Preparing CRI-O 1.29.1 ...
	I1014 20:07:18.057398  403014 ssh_runner.go:195] Run: rm -f paused
	I1014 20:07:18.064892  403014 out.go:179] * Done! minikube is ready without Kubernetes!
	I1014 20:07:18.068810  403014 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1014 20:07:17.752239  402335 pod_ready.go:94] pod "kube-controller-manager-pause-488160" is "Ready"
	I1014 20:07:17.752276  402335 pod_ready.go:86] duration metric: took 175.005622ms for pod "kube-controller-manager-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:17.952542  402335 pod_ready.go:83] waiting for pod "kube-proxy-7g2cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.351472  402335 pod_ready.go:94] pod "kube-proxy-7g2cw" is "Ready"
	I1014 20:07:18.351507  402335 pod_ready.go:86] duration metric: took 398.930477ms for pod "kube-proxy-7g2cw" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.552073  402335 pod_ready.go:83] waiting for pod "kube-scheduler-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.952897  402335 pod_ready.go:94] pod "kube-scheduler-pause-488160" is "Ready"
	I1014 20:07:18.952936  402335 pod_ready.go:86] duration metric: took 400.832438ms for pod "kube-scheduler-pause-488160" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:07:18.952951  402335 pod_ready.go:40] duration metric: took 10.912499429s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:07:19.023371  402335 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:07:19.024914  402335 out.go:179] * Done! kubectl is now configured to use "pause-488160" cluster and "default" namespace by default
	I1014 20:07:14.638881  403592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:07:14.638941  403592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:07:14.638952  403592 cache.go:58] Caching tarball of preloaded images
	I1014 20:07:14.639063  403592 preload.go:233] Found /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:07:14.639078  403592 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:07:14.639191  403592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/config.json ...
	I1014 20:07:14.639430  403592 start.go:360] acquireMachinesLock for kubernetes-upgrade-425560: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 20:07:15.764089  403305 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 20:07:15.764350  403305 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:07:15.764412  403305 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:07:15.781845  403305 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46331
	I1014 20:07:15.782367  403305 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:07:15.782983  403305 main.go:141] libmachine: Using API Version  1
	I1014 20:07:15.783009  403305 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:07:15.783457  403305 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:07:15.783708  403305 main.go:141] libmachine: (force-systemd-env-702842) Calling .GetMachineName
	I1014 20:07:15.783897  403305 main.go:141] libmachine: (force-systemd-env-702842) Calling .DriverName
	I1014 20:07:15.784088  403305 start.go:159] libmachine.API.Create for "force-systemd-env-702842" (driver="kvm2")
	I1014 20:07:15.784123  403305 client.go:168] LocalClient.Create starting
	I1014 20:07:15.784163  403305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 20:07:15.784207  403305 main.go:141] libmachine: Decoding PEM data...
	I1014 20:07:15.784231  403305 main.go:141] libmachine: Parsing certificate...
	I1014 20:07:15.784336  403305 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 20:07:15.784371  403305 main.go:141] libmachine: Decoding PEM data...
	I1014 20:07:15.784392  403305 main.go:141] libmachine: Parsing certificate...
	I1014 20:07:15.784417  403305 main.go:141] libmachine: Running pre-create checks...
	I1014 20:07:15.784437  403305 main.go:141] libmachine: (force-systemd-env-702842) Calling .PreCreateCheck
	I1014 20:07:15.784795  403305 main.go:141] libmachine: (force-systemd-env-702842) Calling .GetConfigRaw
	I1014 20:07:15.785267  403305 main.go:141] libmachine: Creating machine...
	I1014 20:07:15.785285  403305 main.go:141] libmachine: (force-systemd-env-702842) Calling .Create
	I1014 20:07:15.785426  403305 main.go:141] libmachine: (force-systemd-env-702842) creating domain...
	I1014 20:07:15.785460  403305 main.go:141] libmachine: (force-systemd-env-702842) creating network...
	I1014 20:07:15.786834  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | found existing default network
	I1014 20:07:15.787057  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | <network connections='3'>
	I1014 20:07:15.787079  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <name>default</name>
	I1014 20:07:15.787102  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 20:07:15.787127  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <forward mode='nat'>
	I1014 20:07:15.787138  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <nat>
	I1014 20:07:15.787149  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <port start='1024' end='65535'/>
	I1014 20:07:15.787159  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </nat>
	I1014 20:07:15.787170  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </forward>
	I1014 20:07:15.787181  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 20:07:15.787194  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 20:07:15.787223  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 20:07:15.787239  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <dhcp>
	I1014 20:07:15.787251  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 20:07:15.787261  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </dhcp>
	I1014 20:07:15.787271  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </ip>
	I1014 20:07:15.787281  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | </network>
	I1014 20:07:15.787325  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | 
	I1014 20:07:15.788359  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:15.788126  403649 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:ef:58} reservation:<nil>}
	I1014 20:07:15.788997  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:15.788897  403649 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:db:43:4c} reservation:<nil>}
	I1014 20:07:15.790097  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:15.789975  403649 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00029a960}
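The network helper above walks the existing libvirt networks, skips the taken 192.168.39.0/24 and 192.168.50.0/24 subnets, and settles on 192.168.61.0/24. A hedged way to inspect the same state by hand:

	# list the subnet each defined libvirt network occupies
	for net in $(virsh net-list --all --name); do
	    echo "== $net"
	    virsh net-dumpxml "$net" | grep -o "ip address='[^']*' netmask='[^']*'"
	done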
	I1014 20:07:15.790117  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | defining private network:
	I1014 20:07:15.790128  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | 
	I1014 20:07:15.790141  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | <network>
	I1014 20:07:15.790153  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <name>mk-force-systemd-env-702842</name>
	I1014 20:07:15.790166  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <dns enable='no'/>
	I1014 20:07:15.790195  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:07:15.790219  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <dhcp>
	I1014 20:07:15.790242  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:07:15.790254  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </dhcp>
	I1014 20:07:15.790266  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </ip>
	I1014 20:07:15.790276  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | </network>
	I1014 20:07:15.790293  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | 
	I1014 20:07:15.796290  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | creating private network mk-force-systemd-env-702842 192.168.61.0/24...
	I1014 20:07:15.876540  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | private network mk-force-systemd-env-702842 192.168.61.0/24 created
	I1014 20:07:15.876779  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | <network>
	I1014 20:07:15.876804  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <name>mk-force-systemd-env-702842</name>
	I1014 20:07:15.876816  403305 main.go:141] libmachine: (force-systemd-env-702842) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842 ...
	I1014 20:07:15.876841  403305 main.go:141] libmachine: (force-systemd-env-702842) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 20:07:15.876856  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <uuid>7eadc4a9-0b05-4605-ba59-67f6f2030e59</uuid>
	I1014 20:07:15.876867  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <bridge name='virbr4' stp='on' delay='0'/>
	I1014 20:07:15.876884  403305 main.go:141] libmachine: (force-systemd-env-702842) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 20:07:15.876908  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <mac address='52:54:00:31:23:6a'/>
	I1014 20:07:15.876921  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <dns enable='no'/>
	I1014 20:07:15.876934  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:07:15.876942  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <dhcp>
	I1014 20:07:15.876954  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:07:15.876963  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </dhcp>
	I1014 20:07:15.876973  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </ip>
	I1014 20:07:15.876992  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | </network>
	I1014 20:07:15.877005  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | 
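The XML dumped above is the entire definition of the private network: isolated (no forward element), DNS disabled, a single DHCP range. An assumed virsh equivalent of the "creating private network" step, with network.xml holding the XML shown above:

	# persist and start the private network from its XML definition
	virsh net-define network.xml
	virsh net-start mk-force-systemd-env-702842
	virsh net-autostart mk-force-systemd-env-702842   # survive libvirtd restarts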
	I1014 20:07:15.877026  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:15.876770  403649 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:07:16.165214  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:16.165070  403649 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/id_rsa...
	I1014 20:07:16.289580  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:16.289413  403649 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/force-systemd-env-702842.rawdisk...
	I1014 20:07:16.289622  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | Writing magic tar header
	I1014 20:07:16.289640  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | Writing SSH key tar header
	I1014 20:07:16.289653  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:16.289599  403649 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842 ...
	I1014 20:07:16.289725  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842
	I1014 20:07:16.289755  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 20:07:16.289781  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842 (perms=drwx------)
	I1014 20:07:16.289794  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:07:16.289810  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 20:07:16.289823  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 20:07:16.289837  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home/jenkins
	I1014 20:07:16.289851  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 20:07:16.289860  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | checking permissions on dir: /home
	I1014 20:07:16.289874  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | skipping /home - not owner
	I1014 20:07:16.289891  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 20:07:16.289905  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 20:07:16.289930  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 20:07:16.289948  403305 main.go:141] libmachine: (force-systemd-env-702842) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 20:07:16.289961  403305 main.go:141] libmachine: (force-systemd-env-702842) defining domain...
	I1014 20:07:16.291243  403305 main.go:141] libmachine: (force-systemd-env-702842) defining domain using XML: 
	I1014 20:07:16.291267  403305 main.go:141] libmachine: (force-systemd-env-702842) <domain type='kvm'>
	I1014 20:07:16.291305  403305 main.go:141] libmachine: (force-systemd-env-702842)   <name>force-systemd-env-702842</name>
	I1014 20:07:16.291373  403305 main.go:141] libmachine: (force-systemd-env-702842)   <memory unit='MiB'>3072</memory>
	I1014 20:07:16.291417  403305 main.go:141] libmachine: (force-systemd-env-702842)   <vcpu>2</vcpu>
	I1014 20:07:16.291458  403305 main.go:141] libmachine: (force-systemd-env-702842)   <features>
	I1014 20:07:16.291471  403305 main.go:141] libmachine: (force-systemd-env-702842)     <acpi/>
	I1014 20:07:16.291481  403305 main.go:141] libmachine: (force-systemd-env-702842)     <apic/>
	I1014 20:07:16.291492  403305 main.go:141] libmachine: (force-systemd-env-702842)     <pae/>
	I1014 20:07:16.291499  403305 main.go:141] libmachine: (force-systemd-env-702842)   </features>
	I1014 20:07:16.291508  403305 main.go:141] libmachine: (force-systemd-env-702842)   <cpu mode='host-passthrough'>
	I1014 20:07:16.291514  403305 main.go:141] libmachine: (force-systemd-env-702842)   </cpu>
	I1014 20:07:16.291521  403305 main.go:141] libmachine: (force-systemd-env-702842)   <os>
	I1014 20:07:16.291530  403305 main.go:141] libmachine: (force-systemd-env-702842)     <type>hvm</type>
	I1014 20:07:16.291538  403305 main.go:141] libmachine: (force-systemd-env-702842)     <boot dev='cdrom'/>
	I1014 20:07:16.291546  403305 main.go:141] libmachine: (force-systemd-env-702842)     <boot dev='hd'/>
	I1014 20:07:16.291582  403305 main.go:141] libmachine: (force-systemd-env-702842)     <bootmenu enable='no'/>
	I1014 20:07:16.291617  403305 main.go:141] libmachine: (force-systemd-env-702842)   </os>
	I1014 20:07:16.291627  403305 main.go:141] libmachine: (force-systemd-env-702842)   <devices>
	I1014 20:07:16.291636  403305 main.go:141] libmachine: (force-systemd-env-702842)     <disk type='file' device='cdrom'>
	I1014 20:07:16.291653  403305 main.go:141] libmachine: (force-systemd-env-702842)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/boot2docker.iso'/>
	I1014 20:07:16.291667  403305 main.go:141] libmachine: (force-systemd-env-702842)       <target dev='hdc' bus='scsi'/>
	I1014 20:07:16.291693  403305 main.go:141] libmachine: (force-systemd-env-702842)       <readonly/>
	I1014 20:07:16.291711  403305 main.go:141] libmachine: (force-systemd-env-702842)     </disk>
	I1014 20:07:16.291722  403305 main.go:141] libmachine: (force-systemd-env-702842)     <disk type='file' device='disk'>
	I1014 20:07:16.291733  403305 main.go:141] libmachine: (force-systemd-env-702842)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 20:07:16.291748  403305 main.go:141] libmachine: (force-systemd-env-702842)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/force-systemd-env-702842.rawdisk'/>
	I1014 20:07:16.291760  403305 main.go:141] libmachine: (force-systemd-env-702842)       <target dev='hda' bus='virtio'/>
	I1014 20:07:16.291768  403305 main.go:141] libmachine: (force-systemd-env-702842)     </disk>
	I1014 20:07:16.291779  403305 main.go:141] libmachine: (force-systemd-env-702842)     <interface type='network'>
	I1014 20:07:16.291805  403305 main.go:141] libmachine: (force-systemd-env-702842)       <source network='mk-force-systemd-env-702842'/>
	I1014 20:07:16.291825  403305 main.go:141] libmachine: (force-systemd-env-702842)       <model type='virtio'/>
	I1014 20:07:16.291834  403305 main.go:141] libmachine: (force-systemd-env-702842)     </interface>
	I1014 20:07:16.291849  403305 main.go:141] libmachine: (force-systemd-env-702842)     <interface type='network'>
	I1014 20:07:16.291873  403305 main.go:141] libmachine: (force-systemd-env-702842)       <source network='default'/>
	I1014 20:07:16.291893  403305 main.go:141] libmachine: (force-systemd-env-702842)       <model type='virtio'/>
	I1014 20:07:16.291905  403305 main.go:141] libmachine: (force-systemd-env-702842)     </interface>
	I1014 20:07:16.291913  403305 main.go:141] libmachine: (force-systemd-env-702842)     <serial type='pty'>
	I1014 20:07:16.291925  403305 main.go:141] libmachine: (force-systemd-env-702842)       <target port='0'/>
	I1014 20:07:16.291932  403305 main.go:141] libmachine: (force-systemd-env-702842)     </serial>
	I1014 20:07:16.291946  403305 main.go:141] libmachine: (force-systemd-env-702842)     <console type='pty'>
	I1014 20:07:16.291957  403305 main.go:141] libmachine: (force-systemd-env-702842)       <target type='serial' port='0'/>
	I1014 20:07:16.291969  403305 main.go:141] libmachine: (force-systemd-env-702842)     </console>
	I1014 20:07:16.291976  403305 main.go:141] libmachine: (force-systemd-env-702842)     <rng model='virtio'>
	I1014 20:07:16.291990  403305 main.go:141] libmachine: (force-systemd-env-702842)       <backend model='random'>/dev/random</backend>
	I1014 20:07:16.292002  403305 main.go:141] libmachine: (force-systemd-env-702842)     </rng>
	I1014 20:07:16.292014  403305 main.go:141] libmachine: (force-systemd-env-702842)   </devices>
	I1014 20:07:16.292020  403305 main.go:141] libmachine: (force-systemd-env-702842) </domain>
	I1014 20:07:16.292034  403305 main.go:141] libmachine: (force-systemd-env-702842) 
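The domain XML assembled above boots the boot2docker ISO from a SCSI cdrom, attaches the raw disk over virtio, and plugs the VM into both the private network and libvirt's default network (two virtio NICs). An assumed virsh equivalent of the "defining domain" step, with domain.xml holding that XML:

	# persist the domain definition, then boot it
	virsh define domain.xml
	virsh start force-systemd-env-702842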
	I1014 20:07:16.296503  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:94:39:10 in network default
	I1014 20:07:16.297210  403305 main.go:141] libmachine: (force-systemd-env-702842) starting domain...
	I1014 20:07:16.297231  403305 main.go:141] libmachine: (force-systemd-env-702842) ensuring networks are active...
	I1014 20:07:16.297245  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:16.298172  403305 main.go:141] libmachine: (force-systemd-env-702842) Ensuring network default is active
	I1014 20:07:16.298583  403305 main.go:141] libmachine: (force-systemd-env-702842) Ensuring network mk-force-systemd-env-702842 is active
	I1014 20:07:16.299267  403305 main.go:141] libmachine: (force-systemd-env-702842) getting domain XML...
	I1014 20:07:16.300412  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | starting domain XML:
	I1014 20:07:16.300427  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | <domain type='kvm'>
	I1014 20:07:16.300438  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <name>force-systemd-env-702842</name>
	I1014 20:07:16.300448  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <uuid>b46c804d-90e9-4346-9af6-1059f5bccbf3</uuid>
	I1014 20:07:16.300474  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:07:16.300489  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:07:16.300499  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:07:16.300506  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <os>
	I1014 20:07:16.300527  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:07:16.300543  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <boot dev='cdrom'/>
	I1014 20:07:16.300571  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <boot dev='hd'/>
	I1014 20:07:16.300579  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <bootmenu enable='no'/>
	I1014 20:07:16.300591  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </os>
	I1014 20:07:16.300599  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <features>
	I1014 20:07:16.300611  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <acpi/>
	I1014 20:07:16.300619  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <apic/>
	I1014 20:07:16.300648  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <pae/>
	I1014 20:07:16.300667  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </features>
	I1014 20:07:16.300680  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:07:16.300687  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <clock offset='utc'/>
	I1014 20:07:16.300697  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:07:16.300704  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:07:16.300713  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <on_crash>destroy</on_crash>
	I1014 20:07:16.300722  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   <devices>
	I1014 20:07:16.300732  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:07:16.300749  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <disk type='file' device='cdrom'>
	I1014 20:07:16.300774  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:07:16.300787  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/boot2docker.iso'/>
	I1014 20:07:16.300792  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:07:16.300799  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <readonly/>
	I1014 20:07:16.300807  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:07:16.300813  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </disk>
	I1014 20:07:16.300820  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <disk type='file' device='disk'>
	I1014 20:07:16.300831  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:07:16.300847  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/force-systemd-env-702842/force-systemd-env-702842.rawdisk'/>
	I1014 20:07:16.300860  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:07:16.300872  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:07:16.300881  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </disk>
	I1014 20:07:16.300886  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:07:16.300894  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:07:16.300904  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </controller>
	I1014 20:07:16.300915  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:07:16.300938  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:07:16.300960  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:07:16.300973  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </controller>
	I1014 20:07:16.300986  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <interface type='network'>
	I1014 20:07:16.301000  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <mac address='52:54:00:58:5f:88'/>
	I1014 20:07:16.301012  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <source network='mk-force-systemd-env-702842'/>
	I1014 20:07:16.301025  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <model type='virtio'/>
	I1014 20:07:16.301042  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:07:16.301060  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </interface>
	I1014 20:07:16.301072  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <interface type='network'>
	I1014 20:07:16.301084  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <mac address='52:54:00:94:39:10'/>
	I1014 20:07:16.301096  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <source network='default'/>
	I1014 20:07:16.301117  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <model type='virtio'/>
	I1014 20:07:16.301139  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:07:16.301152  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </interface>
	I1014 20:07:16.301180  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <serial type='pty'>
	I1014 20:07:16.301194  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <target type='isa-serial' port='0'>
	I1014 20:07:16.301209  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |         <model name='isa-serial'/>
	I1014 20:07:16.301220  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       </target>
	I1014 20:07:16.301231  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </serial>
	I1014 20:07:16.301248  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <console type='pty'>
	I1014 20:07:16.301266  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <target type='serial' port='0'/>
	I1014 20:07:16.301278  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </console>
	I1014 20:07:16.301290  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:07:16.301306  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:07:16.301350  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <audio id='1' type='none'/>
	I1014 20:07:16.301367  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <memballoon model='virtio'>
	I1014 20:07:16.301381  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:07:16.301393  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </memballoon>
	I1014 20:07:16.301401  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     <rng model='virtio'>
	I1014 20:07:16.301423  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:07:16.301459  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:07:16.301470  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |     </rng>
	I1014 20:07:16.301481  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG |   </devices>
	I1014 20:07:16.301493  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | </domain>
	I1014 20:07:16.301504  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | 
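For context, the domain XML dumped above is what the KVM driver hands to libvirt before booting the VM. A minimal sketch of defining and starting a domain from such an XML description with the libvirt-go bindings might look like the following; the connection URI, placeholder XML, and error handling are illustrative assumptions, not minikube's literal code:

// Sketch: define and boot a libvirt domain from an XML description.
// Assumes the libvirt.org/go/libvirt bindings and access to qemu:///system;
// this mirrors the flow logged above but is not minikube's exact code.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// DomainDefineXML registers the domain with libvirtd without booting it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// Create() actually starts the defined domain ("domain is now running").
	return dom.Create()
}

func main() {
	// Placeholder XML: in the log above, the full <domain> document is
	// generated by the driver from the machine config.
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}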
	I1014 20:07:17.716052  403305 main.go:141] libmachine: (force-systemd-env-702842) waiting for domain to start...
	I1014 20:07:17.717659  403305 main.go:141] libmachine: (force-systemd-env-702842) domain is now running
	I1014 20:07:17.717684  403305 main.go:141] libmachine: (force-systemd-env-702842) waiting for IP...
	I1014 20:07:17.718741  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:17.719460  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:17.719500  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:17.719836  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:17.719907  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:17.719834  403649 retry.go:31] will retry after 249.704572ms: waiting for domain to come up
	I1014 20:07:17.971748  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:17.972472  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:17.972507  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:17.972857  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:17.972882  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:17.972857  403649 retry.go:31] will retry after 322.401913ms: waiting for domain to come up
	I1014 20:07:18.296779  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:18.297512  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:18.297556  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:18.297951  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:18.297984  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:18.297910  403649 retry.go:31] will retry after 448.279311ms: waiting for domain to come up
	I1014 20:07:18.748091  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:18.749070  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:18.749094  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:18.749541  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:18.749584  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:18.749524  403649 retry.go:31] will retry after 417.667767ms: waiting for domain to come up
	I1014 20:07:19.169822  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:19.170302  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:19.170457  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:19.171031  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:19.171213  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:19.171169  403649 retry.go:31] will retry after 752.236848ms: waiting for domain to come up
	I1014 20:07:19.925738  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | domain force-systemd-env-702842 has defined MAC address 52:54:00:58:5f:88 in network mk-force-systemd-env-702842
	I1014 20:07:19.926789  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | no network interface addresses found for domain force-systemd-env-702842 (source=lease)
	I1014 20:07:19.926816  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | trying to list again with source=arp
	I1014 20:07:19.927168  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | unable to find current IP address of domain force-systemd-env-702842 in network mk-force-systemd-env-702842 (interfaces detected: [])
	I1014 20:07:19.927214  403305 main.go:141] libmachine: (force-systemd-env-702842) DBG | I1014 20:07:19.927142  403649 retry.go:31] will retry after 835.483474ms: waiting for domain to come up
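The retry loop above polls libvirt for the guest's DHCP lease first and falls back to the ARP table, sleeping a randomized, growing interval between attempts ("will retry after ...ms"). A sketch of that pattern with libvirt-go follows; the function name, backoff constants, and timeout handling are illustrative assumptions, not the driver's exact implementation:

// Sketch: wait for a freshly booted domain to acquire an IP address,
// checking the DHCP lease table first (source=lease) and the ARP cache
// second (source=arp), with jittered backoff between attempts.
package kvmutil

import (
	"fmt"
	"math/rand"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

func waitForIP(dom *libvirt.Domain, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		for _, src := range []libvirt.DomainInterfaceAddressesSource{
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_LEASE, // "source=lease" in the log
			libvirt.DOMAIN_INTERFACE_ADDRESSES_SRC_ARP,   // "source=arp" fallback
		} {
			ifaces, err := dom.ListAllInterfaceAddresses(src)
			if err != nil {
				continue // listing can fail while the guest is still booting
			}
			for _, iface := range ifaces {
				for _, addr := range iface.Addrs {
					if addr.Type == int(libvirt.IP_ADDR_TYPE_IPV4) {
						return addr.Addr, nil
					}
				}
			}
		}
		// Randomized, growing delay, like the "will retry after ..." lines above.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for domain IP")
}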
	
	
	==> CRI-O <==
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.023498401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffea398a-cc25-4da9-a995-ac8cb8ddb924 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.024642254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=856489cc-c07f-478f-a12d-808c4c39398e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.025080846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472442025055250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=856489cc-c07f-478f-a12d-808c4c39398e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.025741308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba5b0cd1-07cc-4d9b-82b0-1bf7cf97b9d9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.025900706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba5b0cd1-07cc-4d9b-82b0-1bf7cf97b9d9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.026289309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba5b0cd1-07cc-4d9b-82b0-1bf7cf97b9d9 name=/runtime.v1.RuntimeService/ListContainers
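The CRI-O journal entries in this section are the server side of CRI gRPC traffic: each Request/Response pair is a call on /runtime.v1.RuntimeService or /runtime.v1.ImageService over CRI-O's unix socket, and the id= fields are per-request trace ids. A minimal client sketch of the same Version and ListContainers calls, using the k8s.io/cri-api stubs (the socket path and the CONTAINER_RUNNING filter are assumptions matching the log, not part of the test harness):

// Sketch: issue the CRI calls seen in the log (Version, then
// ListContainers filtered to running containers) against CRI-O's socket.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; kubelet and crictl talk to the same endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// /runtime.v1.RuntimeService/Version
	v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion) // e.g. "cri-o 1.29.1" as logged

	// /runtime.v1.RuntimeService/ListContainers with State=CONTAINER_RUNNING,
	// matching the filtered request near the end of this excerpt.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			State: &runtimeapi.ContainerStateValue{
				State: runtimeapi.ContainerState_CONTAINER_RUNNING,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Id, c.Metadata.Name, c.State)
	}
}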
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.031534355Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68abf049-7204-4255-a1b6-4b5a60d9faee name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.031715985Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-mkw7n,Uid:36f66181-b789-42ba-8a7f-4d680d697982,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472404016516202,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T20:05:26.918064613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-488160,Uid:2841f507ca0337d51963ec3de35897b9,Namespace:kub
e-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403806992157,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2841f507ca0337d51963ec3de35897b9,kubernetes.io/config.seen: 2025-10-14T20:05:21.196780081Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&PodSandboxMetadata{Name:etcd-pause-488160,Uid:83103fec4be4832c85d6356f6f0d2e52,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403804260842,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,tier: cont
rol-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.36:2379,kubernetes.io/config.hash: 83103fec4be4832c85d6356f6f0d2e52,kubernetes.io/config.seen: 2025-10-14T20:05:21.196769354Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&PodSandboxMetadata{Name:kube-proxy-7g2cw,Uid:4d4af20d-b366-4ed8-a198-6aff03448749,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403800125950,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-14T20:05:26.830422059Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:304406956b58f5530f3c92e24307a15a17
5764555318c78deb0855f6be512929,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-488160,Uid:562e0710eb923a2b69cc36a87e0635c4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403792801514,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.36:8443,kubernetes.io/config.hash: 562e0710eb923a2b69cc36a87e0635c4,kubernetes.io/config.seen: 2025-10-14T20:05:21.196779019Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-488160,Uid:17bac2de099c8b85a00c8e835ae46407,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1760472403731944950,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 17bac2de099c8b85a00c8e835ae46407,kubernetes.io/config.seen: 2025-10-14T20:05:21.196780918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=68abf049-7204-4255-a1b6-4b5a60d9faee name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.032958175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec8fac68-dbc6-421a-a3f5-b0a969beafc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.033085710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec8fac68-dbc6-421a-a3f5-b0a969beafc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.033462240Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec8fac68-dbc6-421a-a3f5-b0a969beafc6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.076800746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=153c6c62-88b1-42fe-bdb6-a3bd82c4338c name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.076950871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=153c6c62-88b1-42fe-bdb6-a3bd82c4338c name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.079349954Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99ec2de9-aaf1-48a4-9f6b-4c4ee81e3e25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.080075103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472442080050908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99ec2de9-aaf1-48a4-9f6b-4c4ee81e3e25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.080608164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee488982-9c14-4b51-a09b-287503331f2f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.080890810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee488982-9c14-4b51-a09b-287503331f2f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.081641425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee488982-9c14-4b51-a09b-287503331f2f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.134255953Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9dd2a4b-acbe-429f-bb51-9eaff160ec0f name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.134358937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9dd2a4b-acbe-429f-bb51-9eaff160ec0f name=/runtime.v1.RuntimeService/Version
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.136177831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93f09664-79ef-4cb3-81cf-97fcaa75c67b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.136742819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760472442136716548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93f09664-79ef-4cb3-81cf-97fcaa75c67b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.137643664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40f7e9a9-9a8d-4a23-931b-4216523c8524 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.137720290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40f7e9a9-9a8d-4a23-931b-4216523c8524 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:07:22 pause-488160 crio[2570]: time="2025-10-14 20:07:22.138120937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472425263111947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472425268026088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472420714773672,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c283373
88d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472420676673504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472420664312846,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472416366880926,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871,PodSandboxId:9e7190300541f2937ce4f890983738bfdf2d2f16108dc5
734b8439784cede27a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760472405527099615,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-mkw7n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f66181-b789-42ba-8a7f-4d680d697982,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f,PodSandboxId:378ca24845c9c55459da4685657f467f762ceb02ddde3797922c929259c0c5db,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760472404539837122,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7g2cw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d4af20d-b366-4ed8-a198-6aff03448749,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493,PodSandboxId:0f73f48ed91713353875b2d7f5e7a783c0f497ff51339ad7cb65b8e795187dcd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760472404500974816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83103fec4be4832c85d6356f6f0d2e52,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\"
:2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1,PodSandboxId:304406956b58f5530f3c92e24307a15a175764555318c78deb0855f6be512929,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760472404479795826,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562e0710eb923a2b69cc36a87e0635c4,},Annotations:map[string]string{io.kubernetes.container.hash:
d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d,PodSandboxId:fe3903e8317b807180207df83a389ea1b25526c10747790e0012ee4ac1c25cdc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760472404415856217,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-488160,io.kubernetes.pod.namespace:
kube-system,io.kubernetes.pod.uid: 2841f507ca0337d51963ec3de35897b9,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df,PodSandboxId:9e778b6c8388edcada8faa5ed5c6a844f1a4fb42bea804edfa88d05234bbd207,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760472404229860706,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-488160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17bac2de099c8b85a00c8e835ae46407,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40f7e9a9-9a8d-4a23-931b-4216523c8524 name=/runtime.v1.RuntimeService/ListContainers
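
	[editor's note] The crio debug entries above are the CRI /runtime.v1.RuntimeService/ListContainers round-trip that kubelet and crictl drive over the crio socket. As a minimal sketch only (not part of the test harness), the same call can be issued directly; the socket path is the crio default and an assumption here:

	// cri_list.go - a minimal sketch of the RuntimeService/ListContainers call
	// logged above, issued directly over the crio socket (path assumed).
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter matches the "No filters were applied, returning full
		// container list" line in the crio log: every container is returned.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s attempt=%d %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}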
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f7ba9e8060d98       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   9e7190300541f       coredns-66bc5c9577-mkw7n
	4e90ac414d78a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   16 seconds ago      Running             kube-proxy                2                   378ca24845c9c       kube-proxy-7g2cw
	00096d625a013       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago      Running             kube-scheduler            2                   9e778b6c8388e       kube-scheduler-pause-488160
	5ec14772e0933       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago      Running             kube-controller-manager   2                   fe3903e8317b8       kube-controller-manager-pause-488160
	69d4f2f9551e1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago      Running             kube-apiserver            2                   304406956b58f       kube-apiserver-pause-488160
	355aef60e9fa3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   25 seconds ago      Running             etcd                      2                   0f73f48ed9171       etcd-pause-488160
	a7673602dd13a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   36 seconds ago      Exited              coredns                   1                   9e7190300541f       coredns-66bc5c9577-mkw7n
	b1c20d90de825       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   37 seconds ago      Exited              kube-proxy                1                   378ca24845c9c       kube-proxy-7g2cw
	eabddd5066982       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Exited              etcd                      1                   0f73f48ed9171       etcd-pause-488160
	97414ca2b92dc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Exited              kube-apiserver            1                   304406956b58f       kube-apiserver-pause-488160
	4a64743e6066f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Exited              kube-controller-manager   1                   fe3903e8317b8       kube-controller-manager-pause-488160
	cbfcc79a2721f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Exited              kube-scheduler            1                   9e778b6c8388e       kube-scheduler-pause-488160
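
	[editor's note] The ATTEMPT column corresponds to the io.kubernetes.container.restartCount annotation in the crio response above: after the second kubelet restart every control-plane container is on attempt 2 and Running, with its attempt-1 instance left Exited. A rough sketch for condensing `crictl ps -a -o json` into this table; the JSON field names mirror the ListContainersResponse fields visible in the crio log, but the exact casing crictl emits is an assumption:

	// summarize.go - rough sketch; assumes crictl is configured for the crio
	// socket and that its JSON output uses the field names below.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type container struct {
		Metadata struct {
			Name    string `json:"name"`
			Attempt int    `json:"attempt"`
		} `json:"metadata"`
		State string `json:"state"`
	}

	func main() {
		out, err := exec.Command("crictl", "ps", "-a", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var resp struct {
			Containers []container `json:"containers"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-25s attempt=%d state=%s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}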
	
	
	==> coredns [a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871] <==
	
	
	==> coredns [f7ba9e8060d98331b2584f3956669dcca5b0268d9d133e109d6795baff25d702] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47345 - 2004 "HINFO IN 1534063794210058555.795810395313342923. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.027727208s
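
	[editor's note] The HINFO query with random numeric labels is CoreDNS's loop-detection probe; an NXDOMAIN answer there is the healthy outcome, so this section shows CoreDNS coming up cleanly after the restart. To spot-check cluster DNS by hand, something like the sketch below could be run from inside the cluster; 10.96.0.10 is the conventional kube-dns ClusterIP for the 10.96.0.0/12 service CIDR seen later in the apiserver log, and is an assumption here:

	// dnscheck.go - minimal sketch; assumes the conventional kube-dns
	// ClusterIP (10.96.0.10) and a route to the service network.
	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				// Send every query straight to CoreDNS, bypassing /etc/resolv.conf.
				return (&net.Dialer{Timeout: 2 * time.Second}).DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		if err != nil {
			log.Fatalf("cluster DNS lookup failed: %v", err)
		}
		fmt.Println("kubernetes.default resolves to", addrs)
	}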
	
	
	==> describe nodes <==
	Name:               pause-488160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-488160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=pause-488160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_05_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:05:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-488160
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:07:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:07:04 +0000   Tue, 14 Oct 2025 20:05:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.36
	  Hostname:    pause-488160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c7caad6c2ea4bdba06f0d07a6cc85da
	  System UUID:                3c7caad6-c2ea-4bdb-a06f-0d07a6cc85da
	  Boot ID:                    6cf9d609-b86f-4f06-85a5-86f036ece3e6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mkw7n                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     116s
	  kube-system                 etcd-pause-488160                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m1s
	  kube-system                 kube-apiserver-pause-488160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-488160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-7g2cw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-pause-488160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 114s                 kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m9s)  kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m9s)  kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m9s)  kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  NodeReady                2m                   kubelet          Node pause-488160 status is now: NodeReady
	  Normal  RegisteredNode           117s                 node-controller  Node pause-488160 event: Registered Node pause-488160 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-488160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-488160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-488160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                  node-controller  Node pause-488160 event: Registered Node pause-488160 in Controller
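
	[editor's note] The Conditions block above (MemoryPressure/DiskPressure/PIDPressure False, Ready True) is the state the test helpers wait on after a restart. A minimal client-go sketch that reads the same conditions programmatically; it assumes a kubeconfig at the default $HOME/.kube/config location rather than the test's own context plumbing:

	// nodeconditions.go - minimal client-go sketch (kubeconfig path assumed).
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-488160", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Mirrors the Conditions table from `kubectl describe node`.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}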
	
	
	==> dmesg <==
	[Oct14 20:04] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000062] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002382] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.207091] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct14 20:05] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.123631] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.637671] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.148082] kauditd_printk_skb: 143 callbacks suppressed
	[  +1.259113] kauditd_printk_skb: 18 callbacks suppressed
	[Oct14 20:06] kauditd_printk_skb: 190 callbacks suppressed
	[  +2.744135] kauditd_printk_skb: 319 callbacks suppressed
	[Oct14 20:07] kauditd_printk_skb: 81 callbacks suppressed
	[  +9.551782] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [355aef60e9fa32ea21dd61248eccd4c25c356b1a1a7dfecaac9f92587a38363b] <==
	{"level":"warn","ts":"2025-10-14T20:07:06.750858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"416.36827ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.750946Z","caller":"traceutil/trace.go:172","msg":"trace[1945244499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:453; }","duration":"416.479108ms","start":"2025-10-14T20:07:06.334450Z","end":"2025-10-14T20:07:06.750929Z","steps":["trace[1945244499] 'agreement among raft nodes before linearized reading'  (duration: 201.635819ms)","trace[1945244499] 'range keys from in-memory index tree'  (duration: 214.711277ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.750984Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.334430Z","time spent":"416.544657ms","remote":"127.0.0.1:53556","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:07:06.753001Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.190356ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11334885043123316792 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.36\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.50.36\" value_size:66 lease:2111513006268540982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.36\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-14T20:07:06.753390Z","caller":"traceutil/trace.go:172","msg":"trace[1060290842] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:487; }","duration":"217.236086ms","start":"2025-10-14T20:07:06.536061Z","end":"2025-10-14T20:07:06.753297Z","steps":["trace[1060290842] 'read index received'  (duration: 14.611µs)","trace[1060290842] 'applied index is now lower than readState.Index'  (duration: 217.220609ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.753905Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.187722ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.754012Z","caller":"traceutil/trace.go:172","msg":"trace[1674341272] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:454; }","duration":"267.392682ms","start":"2025-10-14T20:07:06.486557Z","end":"2025-10-14T20:07:06.753949Z","steps":["trace[1674341272] 'agreement among raft nodes before linearized reading'  (duration: 267.168491ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:06.755683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"270.574982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/persistent-volume-binder\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-14T20:07:06.755968Z","caller":"traceutil/trace.go:172","msg":"trace[457255671] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/persistent-volume-binder; range_end:; response_count:1; response_revision:454; }","duration":"270.857689ms","start":"2025-10-14T20:07:06.485098Z","end":"2025-10-14T20:07:06.755956Z","steps":["trace[457255671] 'agreement among raft nodes before linearized reading'  (duration: 270.343497ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:06.757221Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"266.249407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:06.764645Z","caller":"traceutil/trace.go:172","msg":"trace[454865915] range","detail":"{range_begin:/registry/serviceaccounts; range_end:; response_count:0; response_revision:454; }","duration":"277.390789ms","start":"2025-10-14T20:07:06.487237Z","end":"2025-10-14T20:07:06.764628Z","steps":["trace[454865915] 'agreement among raft nodes before linearized reading'  (duration: 266.216103ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:07:06.764523Z","caller":"traceutil/trace.go:172","msg":"trace[579126946] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"776.306979ms","start":"2025-10-14T20:07:05.987952Z","end":"2025-10-14T20:07:06.764259Z","steps":["trace[579126946] 'process raft request'  (duration: 548.160958ms)","trace[579126946] 'compare'  (duration: 214.717429ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:06.770670Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:05.987929Z","time spent":"782.679207ms","remote":"127.0.0.1:53276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.36\" mod_revision:0 > success:<request_put:<key:\"/registry/masterleases/192.168.50.36\" value_size:66 lease:2111513006268540982 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.36\" > >"}
	{"level":"warn","ts":"2025-10-14T20:07:07.304669Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11334885043123316802,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-10-14T20:07:07.315323Z","caller":"traceutil/trace.go:172","msg":"trace[939364981] linearizableReadLoop","detail":"{readStateIndex:488; appliedIndex:488; }","duration":"511.340857ms","start":"2025-10-14T20:07:06.803880Z","end":"2025-10-14T20:07:07.315221Z","steps":["trace[939364981] 'read index received'  (duration: 511.33371ms)","trace[939364981] 'applied index is now lower than readState.Index'  (duration: 6.162µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:07:07.318806Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"514.907222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-10-14T20:07:07.318860Z","caller":"traceutil/trace.go:172","msg":"trace[613297594] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:454; }","duration":"514.973846ms","start":"2025-10-14T20:07:06.803875Z","end":"2025-10-14T20:07:07.318849Z","steps":["trace[613297594] 'agreement among raft nodes before linearized reading'  (duration: 511.484249ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.318894Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.803855Z","time spent":"515.027557ms","remote":"127.0.0.1:53588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":228,"request content":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:07:07.319123Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"185.834755ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:07:07.319207Z","caller":"traceutil/trace.go:172","msg":"trace[437794405] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:454; }","duration":"185.918638ms","start":"2025-10-14T20:07:07.133278Z","end":"2025-10-14T20:07:07.319197Z","steps":["trace[437794405] 'range keys from in-memory index tree'  (duration: 185.766869ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:07:07.319714Z","caller":"traceutil/trace.go:172","msg":"trace[1509350123] transaction","detail":"{read_only:false; number_of_response:0; response_revision:455; }","duration":"445.745167ms","start":"2025-10-14T20:07:06.873961Z","end":"2025-10-14T20:07:07.319706Z","steps":["trace[1509350123] 'process raft request'  (duration: 445.7164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.319775Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.873890Z","time spent":"445.850169ms","remote":"127.0.0.1:53956","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/kubeadm:cluster-admins\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kubeadm:cluster-admins\" value_size:375 >> failure:<>"}
	{"level":"info","ts":"2025-10-14T20:07:07.321003Z","caller":"traceutil/trace.go:172","msg":"trace[889367425] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"521.330571ms","start":"2025-10-14T20:07:06.799660Z","end":"2025-10-14T20:07:07.320991Z","steps":["trace[889367425] 'process raft request'  (duration: 515.592218ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:07:07.321349Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.799591Z","time spent":"521.662073ms","remote":"127.0.0.1:53556","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6057,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-488160\" mod_revision:449 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-488160\" value_size:6005 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-488160\" > >"}
	{"level":"warn","ts":"2025-10-14T20:07:07.321481Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:07:06.840455Z","time spent":"481.024294ms","remote":"127.0.0.1:54308","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [eabddd5066982ea30ce9d40af6c1e8f8f30f31bb4d6ead7cdf79191b79047493] <==
	{"level":"info","ts":"2025-10-14T20:06:45.947006Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:06:45.953979Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-14T20:06:45.954264Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2025-10-14T20:06:45.954938Z","caller":"v3rpc/grpc.go:52","msg":"etcdserver: failed to register grpc metrics","error":"duplicate metrics collector registration attempted"}
	{"level":"info","ts":"2025-10-14T20:06:45.974444Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-14T20:06:45.999281Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-14T20:06:46.019385Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.36:2379"}
	{"level":"info","ts":"2025-10-14T20:06:46.312266Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-14T20:06:46.312355Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-488160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"]}
	{"level":"error","ts":"2025-10-14T20:06:46.312445Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-14T20:06:46.312529Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-14T20:06:46.315332Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:06:46.315388Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:06:46.315410Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-14T20:06:46.315461Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-14T20:06:46.318258Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.36:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-14T20:06:46.318371Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.36:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-14T20:06:46.318397Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.36:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:06:46.318257Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e5487579cc149d4d","current-leader-member-id":"e5487579cc149d4d"}
	{"level":"info","ts":"2025-10-14T20:06:46.318484Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-14T20:06:46.318493Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-14T20:06:46.331584Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"error","ts":"2025-10-14T20:06:46.331674Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.36:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-14T20:06:46.331740Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.36:2380"}
	{"level":"info","ts":"2025-10-14T20:06:46.331773Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-488160","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.36:2380"],"advertise-client-urls":["https://192.168.50.36:2379"]}
	
	
	==> kernel <==
	 20:07:22 up 2 min,  0 users,  load average: 1.44, 0.59, 0.23
	Linux pause-488160 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [69d4f2f9551e1d08ed65e858819057880db670c17f70d9cbd4a714290c09867c] <==
	I1014 20:07:04.545722       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1014 20:07:04.545918       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 20:07:04.547729       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1014 20:07:04.547782       1 aggregator.go:171] initial CRD sync complete...
	I1014 20:07:04.547794       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 20:07:04.547802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 20:07:04.547806       1 cache.go:39] Caches are synced for autoregister controller
	I1014 20:07:04.548415       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1014 20:07:04.548508       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1014 20:07:04.556631       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 20:07:04.556745       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1014 20:07:04.556952       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 20:07:04.577307       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:07:04.577348       1 policy_source.go:240] refreshing policies
	I1014 20:07:04.588808       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 20:07:04.593406       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1014 20:07:05.088682       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1014 20:07:05.349215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 20:07:07.541676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1014 20:07:07.616755       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1014 20:07:07.649237       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 20:07:07.656817       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 20:07:08.850044       1 controller.go:667] quota admission added evaluator for: endpoints
	I1014 20:07:08.902454       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1014 20:07:08.950609       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [97414ca2b92dc105ff43b42a93d387443e43583ee0b56d5f09cdbbe38538a4d1] <==
	W1014 20:06:46.521412       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1014 20:06:46.521428       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1014 20:06:46.570252       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1014 20:06:46.570433       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1014 20:06:46.572194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1014 20:06:46.573309       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:46.573496       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1014 20:06:46.579676       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1014 20:06:46.597566       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1014 20:06:46.597622       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 20:06:46.598037       1 instance.go:239] Using reconciler: lease
	W1014 20:06:46.608215       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1014 20:06:46.608937       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.574813       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.574932       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:47.610215       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.056612       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.149015       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:49.268528       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.437492       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.663597       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:51.785024       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:55.376479       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:55.619520       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 20:06:56.325534       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
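
	[editor's note] This exited apiserver's log is just its gRPC channels backing off while etcd was down during the restart window; note the roughly exponential gaps between attempts (≈1s, 1.5s, 2s, ...). A throwaway probe like the following, run on the node, shows the moment 2379 starts accepting connections again; it is a sketch, not part of the test harness:

	// etcdwait.go - throwaway sketch: poll 127.0.0.1:2379 until it accepts
	// TCP connections, mirroring what the apiserver's channels waited for.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		for {
			conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("etcd is accepting connections")
				return
			}
			fmt.Println("still refused:", err)
			time.Sleep(500 * time.Millisecond)
		}
	}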
	
	
	==> kube-controller-manager [4a64743e6066f645414b054d1b53606b4711f9fa27a608e5aa0d73868276377d] <==
	I1014 20:06:46.642116       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:06:47.382064       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1014 20:06:47.382212       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:06:47.383897       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 20:06:47.384073       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 20:06:47.384623       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1014 20:06:47.384743       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [5ec14772e0933d1d2be144b5392733a04f3da0591a5c769db30f6a953353409f] <==
	I1014 20:07:08.584358       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1014 20:07:08.584634       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1014 20:07:08.588469       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1014 20:07:08.592731       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1014 20:07:08.595251       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1014 20:07:08.597382       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1014 20:07:08.597736       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1014 20:07:08.597828       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1014 20:07:08.597879       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1014 20:07:08.597947       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1014 20:07:08.597996       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1014 20:07:08.598049       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1014 20:07:08.599285       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1014 20:07:08.599441       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1014 20:07:08.601427       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1014 20:07:08.606396       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1014 20:07:08.606882       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1014 20:07:08.608600       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1014 20:07:08.612640       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1014 20:07:08.612819       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1014 20:07:08.618606       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1014 20:07:08.618830       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:07:08.664434       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1014 20:07:08.664552       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1014 20:07:08.664576       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4e90ac414d78a1f7ee0f90c04a73687faad5480589d8567f5e6c48bdef43a6a4] <==
	I1014 20:07:06.635376       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:07:06.736001       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:07:06.736045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.36"]
	E1014 20:07:06.736244       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:07:06.793950       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1014 20:07:06.794392       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 20:07:06.794737       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:07:06.817942       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:07:06.818847       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:07:06.818959       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:06.825892       1 config.go:200] "Starting service config controller"
	I1014 20:07:06.825925       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:07:06.825947       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:07:06.825952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:07:06.825966       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:07:06.825971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:07:06.826648       1 config.go:309] "Starting node config controller"
	I1014 20:07:06.826681       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:07:06.826690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:07:06.926670       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1014 20:07:06.926704       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:07:06.926722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f] <==
	I1014 20:06:46.050626       1 server_linux.go:53] "Using iptables proxy"
	
	
	==> kube-scheduler [00096d625a013fe1fa67e0339e9549ec14f0e749605b186df08a717f499dba2f] <==
	I1014 20:07:02.983636       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:07:04.610584       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:07:04.610627       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:07:04.617510       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:07:04.617590       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.618464       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.617572       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:07:04.618583       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:07:04.617635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:04.620402       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:07:04.617647       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:07:04.718726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:07:04.718726       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1014 20:07:04.721290       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [cbfcc79a2721f61b4d433eece506d808e9fafd4dcf77ac2f88cd0f8acd1cb1df] <==
	I1014 20:06:46.814744       1 serving.go:386] Generated self-signed cert in-memory
	W1014 20:06:57.525605       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.50.36:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.50.36:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.50.36:41816->192.168.50.36:8443: read: connection reset by peer
	W1014 20:06:57.525668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 20:06:57.525677       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 20:06:57.536980       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:06:57.537018       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1014 20:06:57.537033       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1014 20:06:57.538985       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539089       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1014 20:06:57.539184       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	E1014 20:06:57.539291       1 shared_informer.go:352] "Unable to sync caches" logger="UnhandledError" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539302       1 configmap_cafile_content.go:213] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:06:57.539317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:06:57.539324       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 20:06:57.539394       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1014 20:06:57.539431       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1014 20:06:57.539437       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1014 20:06:57.539452       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 14 20:07:03 pause-488160 kubelet[3656]: E1014 20:07:03.250452    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.247966    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.333628    3656 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-488160\" not found" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.474235    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.605499    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-488160\" already exists" pod="kube-system/kube-apiserver-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.605702    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.617721    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-488160\" already exists" pod="kube-system/kube-controller-manager-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.617751    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.644476    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-488160\" already exists" pod="kube-system/kube-scheduler-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.644623    3656 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: E1014 20:07:04.656940    3656 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-488160\" already exists" pod="kube-system/etcd-pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685417    3656 kubelet_node_status.go:124] "Node was previously registered" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685670    3656 kubelet_node_status.go:78] "Successfully registered node" node="pause-488160"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.685708    3656 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.687747    3656 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.941877    3656 apiserver.go:52] "Watching apiserver"
	Oct 14 20:07:04 pause-488160 kubelet[3656]: I1014 20:07:04.992011    3656 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.085832    3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d4af20d-b366-4ed8-a198-6aff03448749-xtables-lock\") pod \"kube-proxy-7g2cw\" (UID: \"4d4af20d-b366-4ed8-a198-6aff03448749\") " pod="kube-system/kube-proxy-7g2cw"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.087315    3656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d4af20d-b366-4ed8-a198-6aff03448749-lib-modules\") pod \"kube-proxy-7g2cw\" (UID: \"4d4af20d-b366-4ed8-a198-6aff03448749\") " pod="kube-system/kube-proxy-7g2cw"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.247829    3656 scope.go:117] "RemoveContainer" containerID="a7673602dd13a26d88d5b8cc01cb5805084d98654c1181e0ca52246c245d6871"
	Oct 14 20:07:05 pause-488160 kubelet[3656]: I1014 20:07:05.249310    3656 scope.go:117] "RemoveContainer" containerID="b1c20d90de825fa8bd5bbbc15a4636af8994f83f87b5182c20f342f198d2347f"
	Oct 14 20:07:10 pause-488160 kubelet[3656]: E1014 20:07:10.182481    3656 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760472430181951440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:10 pause-488160 kubelet[3656]: E1014 20:07:10.182512    3656 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760472430181951440  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:20 pause-488160 kubelet[3656]: E1014 20:07:20.184338    3656 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760472440183672617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 14 20:07:20 pause-488160 kubelet[3656]: E1014 20:07:20.184390    3656 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760472440183672617  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-488160 -n pause-488160
helpers_test.go:269: (dbg) Run:  kubectl --context pause-488160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (75.81s)
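The kubelet log above ends with repeated eviction_manager failures ("failed to get HasDedicatedImageFs: missing image stats") against the cri-o overlay-images filesystem. A minimal way to look at the image-filesystem stats the kubelet is querying, assuming crictl is available inside the minikube VM (the profile name pause-488160 is taken from the logs above), would be:

	# Hedged sketch: query the CRI image filesystem info directly inside the VM
	out/minikube-linux-amd64 -p pause-488160 ssh -- sudo crictl imagefsinfo

If this reports usage for /var/lib/containers/storage/overlay-images, the stats exist at the CRI level, which would suggest the eviction manager errors concern how the kubelet interprets the response rather than missing data; the error text itself embeds an image_filesystems block with non-zero used_bytes.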

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.63s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhkkm" [11c1df79-7653-4919-a97e-456c684eec60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-14 20:25:17.084905107 +0000 UTC m=+4501.551050882
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-158674 describe po kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-158674 describe po kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-lhkkm
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-158674/192.168.50.78
Start Time:       Tue, 14 Oct 2025 20:16:10 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85s8k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-85s8k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  9m11s                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
  Normal   Scheduled         9m6s                    default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm to embed-certs-158674
  Warning  Failed            5m30s (x2 over 7m49s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling           3m57s (x5 over 9m6s)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed            3m25s (x3 over 8m34s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed            3m25s (x5 over 8m34s)   kubelet            Error: ErrImagePull
  Warning  Failed            2m21s (x16 over 8m34s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff           82s (x21 over 8m34s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard: exit status 1 (77.662127ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-lhkkm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
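The Events above show every pull path failing with toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit, rather than any cluster-side fault. A hedged way to check the remaining anonymous quota from the CI host, using Docker's documented rate-limit preview image (assumes curl and jq are installed):

	# Obtain an anonymous token for the ratelimitpreview/test repository,
	# then read the ratelimit-limit / ratelimit-remaining response headers.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

One possible mitigation (an assumption, not something this job currently does) would be to side-load the dashboard image into the profile so the node never hits the registry, e.g. out/minikube-linux-amd64 -p embed-certs-158674 image load docker.io/kubernetesui/dashboard:v2.7.0.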
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158674 -n embed-certs-158674
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-158674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-158674 logs -n 25: (1.311267647s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-880673 sudo iptables -t nat -L -n -v                                 │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status kubelet --all --full --no-pager         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl cat kubelet --no-pager                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status docker --all --full --no-pager          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat docker --no-pager                          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/docker/daemon.json                              │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo docker system info                                       │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat cri-docker --no-pager                      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cri-dockerd --version                                    │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status containerd --all --full --no-pager      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat containerd --no-pager                      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /lib/systemd/system/containerd.service               │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/containerd/config.toml                          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo containerd config dump                                   │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status crio --all --full --no-pager            │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl cat crio --no-pager                            │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo crio config                                              │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ delete  │ -p bridge-880673                                                               │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:17:32
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:17:32.989439  421402 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:17:32.989829  421402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:17:32.989845  421402 out.go:374] Setting ErrFile to fd 2...
	I1014 20:17:32.989851  421402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:17:32.990172  421402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:17:32.991056  421402 out.go:368] Setting JSON to false
	I1014 20:17:32.992860  421402 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7196,"bootTime":1760465857,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:17:32.992967  421402 start.go:141] virtualization: kvm guest
	I1014 20:17:32.995056  421402 out.go:179] * [bridge-880673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:17:32.996549  421402 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:17:32.996532  421402 notify.go:220] Checking for updates...
	I1014 20:17:33.000156  421402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:17:33.001647  421402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:17:33.003125  421402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:33.007989  421402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:17:33.009484  421402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:17:33.011588  421402 config.go:182] Loaded profile config "embed-certs-158674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.011769  421402 config.go:182] Loaded profile config "enable-default-cni-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.011928  421402 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.012093  421402 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:17:33.059258  421402 out.go:179] * Using the kvm2 driver based on user configuration
	I1014 20:17:33.060454  421402 start.go:305] selected driver: kvm2
	I1014 20:17:33.060476  421402 start.go:925] validating driver "kvm2" against <nil>
	I1014 20:17:33.060492  421402 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:17:33.061267  421402 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:17:33.061387  421402 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:17:33.077958  421402 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:17:33.077999  421402 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:17:33.095092  421402 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:17:33.095155  421402 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:17:33.095523  421402 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:17:33.095569  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:17:33.095578  421402 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 20:17:33.095654  421402 start.go:349] cluster config:
	{Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:17:33.095800  421402 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:17:33.098425  421402 out.go:179] * Starting "bridge-880673" primary control-plane node in "bridge-880673" cluster
	I1014 20:17:30.010628  421087 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 20:17:30.010787  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:30.010834  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:30.028645  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I1014 20:17:30.029184  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:30.029738  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:17:30.029764  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:30.030161  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:30.030410  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:30.030581  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:30.030784  421087 start.go:159] libmachine.API.Create for "flannel-880673" (driver="kvm2")
	I1014 20:17:30.030820  421087 client.go:168] LocalClient.Create starting
	I1014 20:17:30.030865  421087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 20:17:30.030912  421087 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:30.030940  421087 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:30.031019  421087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 20:17:30.031060  421087 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:30.031074  421087 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:30.031099  421087 main.go:141] libmachine: Running pre-create checks...
	I1014 20:17:30.031112  421087 main.go:141] libmachine: (flannel-880673) Calling .PreCreateCheck
	I1014 20:17:30.031527  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:30.032027  421087 main.go:141] libmachine: Creating machine...
	I1014 20:17:30.032044  421087 main.go:141] libmachine: (flannel-880673) Calling .Create
	I1014 20:17:30.032196  421087 main.go:141] libmachine: (flannel-880673) creating domain...
	I1014 20:17:30.032211  421087 main.go:141] libmachine: (flannel-880673) creating network...
	I1014 20:17:30.033766  421087 main.go:141] libmachine: (flannel-880673) DBG | found existing default network
	I1014 20:17:30.033965  421087 main.go:141] libmachine: (flannel-880673) DBG | <network connections='3'>
	I1014 20:17:30.033988  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>default</name>
	I1014 20:17:30.033999  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 20:17:30.034006  421087 main.go:141] libmachine: (flannel-880673) DBG |   <forward mode='nat'>
	I1014 20:17:30.034014  421087 main.go:141] libmachine: (flannel-880673) DBG |     <nat>
	I1014 20:17:30.034024  421087 main.go:141] libmachine: (flannel-880673) DBG |       <port start='1024' end='65535'/>
	I1014 20:17:30.034032  421087 main.go:141] libmachine: (flannel-880673) DBG |     </nat>
	I1014 20:17:30.034042  421087 main.go:141] libmachine: (flannel-880673) DBG |   </forward>
	I1014 20:17:30.034051  421087 main.go:141] libmachine: (flannel-880673) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 20:17:30.034060  421087 main.go:141] libmachine: (flannel-880673) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 20:17:30.034079  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 20:17:30.034096  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.034125  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 20:17:30.034137  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.034145  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.034152  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.034162  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.035359  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.035159  421144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123b20}
	I1014 20:17:30.035396  421087 main.go:141] libmachine: (flannel-880673) DBG | defining private network:
	I1014 20:17:30.035426  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.035439  421087 main.go:141] libmachine: (flannel-880673) DBG | <network>
	I1014 20:17:30.035447  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>mk-flannel-880673</name>
	I1014 20:17:30.035453  421087 main.go:141] libmachine: (flannel-880673) DBG |   <dns enable='no'/>
	I1014 20:17:30.035461  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 20:17:30.035467  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.035475  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 20:17:30.035480  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.035487  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.035493  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.035502  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.041667  421087 main.go:141] libmachine: (flannel-880673) DBG | creating private network mk-flannel-880673 192.168.39.0/24...
	I1014 20:17:30.127637  421087 main.go:141] libmachine: (flannel-880673) DBG | private network mk-flannel-880673 192.168.39.0/24 created
	I1014 20:17:30.127999  421087 main.go:141] libmachine: (flannel-880673) DBG | <network>
	I1014 20:17:30.128023  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>mk-flannel-880673</name>
	I1014 20:17:30.128038  421087 main.go:141] libmachine: (flannel-880673) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 ...
	I1014 20:17:30.128060  421087 main.go:141] libmachine: (flannel-880673) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 20:17:30.128074  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>c5a771e5-e794-47b9-85b0-f17e7652bf2d</uuid>
	I1014 20:17:30.128085  421087 main.go:141] libmachine: (flannel-880673) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1014 20:17:30.128109  421087 main.go:141] libmachine: (flannel-880673) DBG |   <mac address='52:54:00:5d:dc:bd'/>
	I1014 20:17:30.128132  421087 main.go:141] libmachine: (flannel-880673) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 20:17:30.128141  421087 main.go:141] libmachine: (flannel-880673) DBG |   <dns enable='no'/>
	I1014 20:17:30.128155  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 20:17:30.128162  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.128172  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 20:17:30.128182  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.128191  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.128201  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.128212  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.128237  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.127980  421144 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:30.429228  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.429048  421144 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa...
	I1014 20:17:31.000581  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:31.000432  421144 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk...
	I1014 20:17:31.000623  421087 main.go:141] libmachine: (flannel-880673) DBG | Writing magic tar header
	I1014 20:17:31.000650  421087 main.go:141] libmachine: (flannel-880673) DBG | Writing SSH key tar header
	I1014 20:17:31.000710  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:31.000643  421144 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 ...
	I1014 20:17:31.000786  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673
	I1014 20:17:31.000844  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 20:17:31.000876  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 (perms=drwx------)
	I1014 20:17:31.000888  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:31.000907  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 20:17:31.000915  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 20:17:31.000940  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 20:17:31.000951  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins
	I1014 20:17:31.000963  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home
	I1014 20:17:31.000973  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 20:17:31.000994  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 20:17:31.001007  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 20:17:31.001019  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 20:17:31.001028  421087 main.go:141] libmachine: (flannel-880673) defining domain...
	I1014 20:17:31.001071  421087 main.go:141] libmachine: (flannel-880673) DBG | skipping /home - not owner
	I1014 20:17:31.002262  421087 main.go:141] libmachine: (flannel-880673) defining domain using XML: 
	I1014 20:17:31.002290  421087 main.go:141] libmachine: (flannel-880673) <domain type='kvm'>
	I1014 20:17:31.002302  421087 main.go:141] libmachine: (flannel-880673)   <name>flannel-880673</name>
	I1014 20:17:31.002327  421087 main.go:141] libmachine: (flannel-880673)   <memory unit='MiB'>3072</memory>
	I1014 20:17:31.002358  421087 main.go:141] libmachine: (flannel-880673)   <vcpu>2</vcpu>
	I1014 20:17:31.002385  421087 main.go:141] libmachine: (flannel-880673)   <features>
	I1014 20:17:31.002407  421087 main.go:141] libmachine: (flannel-880673)     <acpi/>
	I1014 20:17:31.002421  421087 main.go:141] libmachine: (flannel-880673)     <apic/>
	I1014 20:17:31.002433  421087 main.go:141] libmachine: (flannel-880673)     <pae/>
	I1014 20:17:31.002439  421087 main.go:141] libmachine: (flannel-880673)   </features>
	I1014 20:17:31.002448  421087 main.go:141] libmachine: (flannel-880673)   <cpu mode='host-passthrough'>
	I1014 20:17:31.002459  421087 main.go:141] libmachine: (flannel-880673)   </cpu>
	I1014 20:17:31.002483  421087 main.go:141] libmachine: (flannel-880673)   <os>
	I1014 20:17:31.002501  421087 main.go:141] libmachine: (flannel-880673)     <type>hvm</type>
	I1014 20:17:31.002510  421087 main.go:141] libmachine: (flannel-880673)     <boot dev='cdrom'/>
	I1014 20:17:31.002521  421087 main.go:141] libmachine: (flannel-880673)     <boot dev='hd'/>
	I1014 20:17:31.002566  421087 main.go:141] libmachine: (flannel-880673)     <bootmenu enable='no'/>
	I1014 20:17:31.002586  421087 main.go:141] libmachine: (flannel-880673)   </os>
	I1014 20:17:31.002601  421087 main.go:141] libmachine: (flannel-880673)   <devices>
	I1014 20:17:31.002610  421087 main.go:141] libmachine: (flannel-880673)     <disk type='file' device='cdrom'>
	I1014 20:17:31.002633  421087 main.go:141] libmachine: (flannel-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/boot2docker.iso'/>
	I1014 20:17:31.002643  421087 main.go:141] libmachine: (flannel-880673)       <target dev='hdc' bus='scsi'/>
	I1014 20:17:31.002656  421087 main.go:141] libmachine: (flannel-880673)       <readonly/>
	I1014 20:17:31.002663  421087 main.go:141] libmachine: (flannel-880673)     </disk>
	I1014 20:17:31.002674  421087 main.go:141] libmachine: (flannel-880673)     <disk type='file' device='disk'>
	I1014 20:17:31.002687  421087 main.go:141] libmachine: (flannel-880673)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 20:17:31.002699  421087 main.go:141] libmachine: (flannel-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk'/>
	I1014 20:17:31.002712  421087 main.go:141] libmachine: (flannel-880673)       <target dev='hda' bus='virtio'/>
	I1014 20:17:31.002726  421087 main.go:141] libmachine: (flannel-880673)     </disk>
	I1014 20:17:31.002738  421087 main.go:141] libmachine: (flannel-880673)     <interface type='network'>
	I1014 20:17:31.002750  421087 main.go:141] libmachine: (flannel-880673)       <source network='mk-flannel-880673'/>
	I1014 20:17:31.002759  421087 main.go:141] libmachine: (flannel-880673)       <model type='virtio'/>
	I1014 20:17:31.002782  421087 main.go:141] libmachine: (flannel-880673)     </interface>
	I1014 20:17:31.002801  421087 main.go:141] libmachine: (flannel-880673)     <interface type='network'>
	I1014 20:17:31.002819  421087 main.go:141] libmachine: (flannel-880673)       <source network='default'/>
	I1014 20:17:31.002840  421087 main.go:141] libmachine: (flannel-880673)       <model type='virtio'/>
	I1014 20:17:31.002852  421087 main.go:141] libmachine: (flannel-880673)     </interface>
	I1014 20:17:31.002871  421087 main.go:141] libmachine: (flannel-880673)     <serial type='pty'>
	I1014 20:17:31.002884  421087 main.go:141] libmachine: (flannel-880673)       <target port='0'/>
	I1014 20:17:31.002895  421087 main.go:141] libmachine: (flannel-880673)     </serial>
	I1014 20:17:31.002909  421087 main.go:141] libmachine: (flannel-880673)     <console type='pty'>
	I1014 20:17:31.002916  421087 main.go:141] libmachine: (flannel-880673)       <target type='serial' port='0'/>
	I1014 20:17:31.002927  421087 main.go:141] libmachine: (flannel-880673)     </console>
	I1014 20:17:31.002937  421087 main.go:141] libmachine: (flannel-880673)     <rng model='virtio'>
	I1014 20:17:31.002949  421087 main.go:141] libmachine: (flannel-880673)       <backend model='random'>/dev/random</backend>
	I1014 20:17:31.002962  421087 main.go:141] libmachine: (flannel-880673)     </rng>
	I1014 20:17:31.002974  421087 main.go:141] libmachine: (flannel-880673)   </devices>
	I1014 20:17:31.002982  421087 main.go:141] libmachine: (flannel-880673) </domain>
	I1014 20:17:31.002993  421087 main.go:141] libmachine: (flannel-880673) 
	I1014 20:17:31.008419  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:25:31:9e in network default
	I1014 20:17:31.009073  421087 main.go:141] libmachine: (flannel-880673) starting domain...
	I1014 20:17:31.009113  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:31.009122  421087 main.go:141] libmachine: (flannel-880673) ensuring networks are active...
	I1014 20:17:31.009886  421087 main.go:141] libmachine: (flannel-880673) Ensuring network default is active
	I1014 20:17:31.010346  421087 main.go:141] libmachine: (flannel-880673) Ensuring network mk-flannel-880673 is active
	I1014 20:17:31.011061  421087 main.go:141] libmachine: (flannel-880673) getting domain XML...
	I1014 20:17:31.012375  421087 main.go:141] libmachine: (flannel-880673) DBG | starting domain XML:
	I1014 20:17:31.012399  421087 main.go:141] libmachine: (flannel-880673) DBG | <domain type='kvm'>
	I1014 20:17:31.012419  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>flannel-880673</name>
	I1014 20:17:31.012437  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>dd12b5ae-cea5-4553-b657-8781ab815471</uuid>
	I1014 20:17:31.012446  421087 main.go:141] libmachine: (flannel-880673) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:17:31.012457  421087 main.go:141] libmachine: (flannel-880673) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:17:31.012467  421087 main.go:141] libmachine: (flannel-880673) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:17:31.012479  421087 main.go:141] libmachine: (flannel-880673) DBG |   <os>
	I1014 20:17:31.012491  421087 main.go:141] libmachine: (flannel-880673) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:17:31.012501  421087 main.go:141] libmachine: (flannel-880673) DBG |     <boot dev='cdrom'/>
	I1014 20:17:31.012529  421087 main.go:141] libmachine: (flannel-880673) DBG |     <boot dev='hd'/>
	I1014 20:17:31.012552  421087 main.go:141] libmachine: (flannel-880673) DBG |     <bootmenu enable='no'/>
	I1014 20:17:31.012564  421087 main.go:141] libmachine: (flannel-880673) DBG |   </os>
	I1014 20:17:31.012574  421087 main.go:141] libmachine: (flannel-880673) DBG |   <features>
	I1014 20:17:31.012583  421087 main.go:141] libmachine: (flannel-880673) DBG |     <acpi/>
	I1014 20:17:31.012592  421087 main.go:141] libmachine: (flannel-880673) DBG |     <apic/>
	I1014 20:17:31.012607  421087 main.go:141] libmachine: (flannel-880673) DBG |     <pae/>
	I1014 20:17:31.012616  421087 main.go:141] libmachine: (flannel-880673) DBG |   </features>
	I1014 20:17:31.012623  421087 main.go:141] libmachine: (flannel-880673) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:17:31.012630  421087 main.go:141] libmachine: (flannel-880673) DBG |   <clock offset='utc'/>
	I1014 20:17:31.012646  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:17:31.012672  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:17:31.012682  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_crash>destroy</on_crash>
	I1014 20:17:31.012686  421087 main.go:141] libmachine: (flannel-880673) DBG |   <devices>
	I1014 20:17:31.012695  421087 main.go:141] libmachine: (flannel-880673) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:17:31.012702  421087 main.go:141] libmachine: (flannel-880673) DBG |     <disk type='file' device='cdrom'>
	I1014 20:17:31.012715  421087 main.go:141] libmachine: (flannel-880673) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:17:31.012731  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/boot2docker.iso'/>
	I1014 20:17:31.012743  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:17:31.012756  421087 main.go:141] libmachine: (flannel-880673) DBG |       <readonly/>
	I1014 20:17:31.012783  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:17:31.012803  421087 main.go:141] libmachine: (flannel-880673) DBG |     </disk>
	I1014 20:17:31.012819  421087 main.go:141] libmachine: (flannel-880673) DBG |     <disk type='file' device='disk'>
	I1014 20:17:31.012836  421087 main.go:141] libmachine: (flannel-880673) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:17:31.012852  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk'/>
	I1014 20:17:31.012861  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:17:31.012869  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:17:31.012879  421087 main.go:141] libmachine: (flannel-880673) DBG |     </disk>
	I1014 20:17:31.012890  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:17:31.012899  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:17:31.012919  421087 main.go:141] libmachine: (flannel-880673) DBG |     </controller>
	I1014 20:17:31.012938  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:17:31.012951  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:17:31.012961  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:17:31.012995  421087 main.go:141] libmachine: (flannel-880673) DBG |     </controller>
	I1014 20:17:31.013011  421087 main.go:141] libmachine: (flannel-880673) DBG |     <interface type='network'>
	I1014 20:17:31.013022  421087 main.go:141] libmachine: (flannel-880673) DBG |       <mac address='52:54:00:d6:0d:31'/>
	I1014 20:17:31.013033  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source network='mk-flannel-880673'/>
	I1014 20:17:31.013043  421087 main.go:141] libmachine: (flannel-880673) DBG |       <model type='virtio'/>
	I1014 20:17:31.013060  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:17:31.013072  421087 main.go:141] libmachine: (flannel-880673) DBG |     </interface>
	I1014 20:17:31.013092  421087 main.go:141] libmachine: (flannel-880673) DBG |     <interface type='network'>
	I1014 20:17:31.013105  421087 main.go:141] libmachine: (flannel-880673) DBG |       <mac address='52:54:00:25:31:9e'/>
	I1014 20:17:31.013115  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source network='default'/>
	I1014 20:17:31.013126  421087 main.go:141] libmachine: (flannel-880673) DBG |       <model type='virtio'/>
	I1014 20:17:31.013147  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:17:31.013159  421087 main.go:141] libmachine: (flannel-880673) DBG |     </interface>
	I1014 20:17:31.013166  421087 main.go:141] libmachine: (flannel-880673) DBG |     <serial type='pty'>
	I1014 20:17:31.013175  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target type='isa-serial' port='0'>
	I1014 20:17:31.013185  421087 main.go:141] libmachine: (flannel-880673) DBG |         <model name='isa-serial'/>
	I1014 20:17:31.013193  421087 main.go:141] libmachine: (flannel-880673) DBG |       </target>
	I1014 20:17:31.013202  421087 main.go:141] libmachine: (flannel-880673) DBG |     </serial>
	I1014 20:17:31.013245  421087 main.go:141] libmachine: (flannel-880673) DBG |     <console type='pty'>
	I1014 20:17:31.013278  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target type='serial' port='0'/>
	I1014 20:17:31.013288  421087 main.go:141] libmachine: (flannel-880673) DBG |     </console>
	I1014 20:17:31.013295  421087 main.go:141] libmachine: (flannel-880673) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:17:31.013304  421087 main.go:141] libmachine: (flannel-880673) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:17:31.013323  421087 main.go:141] libmachine: (flannel-880673) DBG |     <audio id='1' type='none'/>
	I1014 20:17:31.013338  421087 main.go:141] libmachine: (flannel-880673) DBG |     <memballoon model='virtio'>
	I1014 20:17:31.013347  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:17:31.013355  421087 main.go:141] libmachine: (flannel-880673) DBG |     </memballoon>
	I1014 20:17:31.013365  421087 main.go:141] libmachine: (flannel-880673) DBG |     <rng model='virtio'>
	I1014 20:17:31.013374  421087 main.go:141] libmachine: (flannel-880673) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:17:31.013389  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:17:31.013396  421087 main.go:141] libmachine: (flannel-880673) DBG |     </rng>
	I1014 20:17:31.013402  421087 main.go:141] libmachine: (flannel-880673) DBG |   </devices>
	I1014 20:17:31.013410  421087 main.go:141] libmachine: (flannel-880673) DBG | </domain>
	I1014 20:17:31.013416  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:32.497442  421087 main.go:141] libmachine: (flannel-880673) waiting for domain to start...
	I1014 20:17:32.499079  421087 main.go:141] libmachine: (flannel-880673) domain is now running
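
The DBG block above is the complete libvirt domain XML the kvm2 driver defines for this VM: a read-only SCSI cdrom for boot2docker.iso, a raw virtio system disk, and two virtio NICs (one on the private mk-flannel-880673 network, one on libvirt's default network). For orientation, a minimal sketch of the define-then-start sequence using the official Go bindings (libvirt.org/go/libvirt); the domainXML constant is a stand-in for the document logged above, and this is an illustration of the API, not the driver's actual code:

    package main

    import (
        "fmt"
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // domainXML stands in for the full <domain>...</domain> document in the log.
    const domainXML = `<domain type='kvm'>...</domain>`

    func main() {
        // The kvm2 driver talks to the system daemon (KVMQemuURI: qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Define the persistent domain from its XML, then boot it -- the two
        // steps the log records as the XML dump and "domain is now running".
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start: %v", err)
        }
        fmt.Println("domain is now running")
    }
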
	I1014 20:17:32.499111  421087 main.go:141] libmachine: (flannel-880673) waiting for IP...
	I1014 20:17:32.500226  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:32.501130  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:32.501160  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:32.501671  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:32.501780  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:32.501708  421144 retry.go:31] will retry after 305.051771ms: waiting for domain to come up
	I1014 20:17:32.808976  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:32.810041  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:32.810067  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:32.810866  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:32.811102  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:32.810990  421144 retry.go:31] will retry after 317.455974ms: waiting for domain to come up
	I1014 20:17:33.130005  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:33.130798  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:33.130828  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:33.131276  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:33.131297  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:33.131261  421144 retry.go:31] will retry after 310.529894ms: waiting for domain to come up
	I1014 20:17:33.444064  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:33.444826  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:33.444865  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:33.445237  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:33.445267  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:33.445205  421144 retry.go:31] will retry after 585.28514ms: waiting for domain to come up
	I1014 20:17:34.032915  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:34.033664  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:34.033693  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:34.034077  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:34.034129  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:34.034053  421144 retry.go:31] will retry after 747.322867ms: waiting for domain to come up
	I1014 20:17:34.783858  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:34.784696  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:34.784728  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:34.785194  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:34.785254  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:34.785162  421144 retry.go:31] will retry after 668.737068ms: waiting for domain to come up
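
Each "will retry after NNNms: waiting for domain to come up" line above is one iteration of a poll loop: list the network's DHCP leases for the VM's MAC, fall back to ARP, and if neither yields an address, sleep a randomized, growing interval and try again. A minimal sketch of that shape, assuming a hypothetical lookupIP callback and illustrative bounds (this is the pattern, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // errNoIP models the "no network interface addresses found" state.
    var errNoIP = errors.New("no network interface addresses found")

    // waitForIP polls lookupIP with a randomized, growing backoff until it
    // returns an address or the deadline passes -- the same shape as the
    // 305ms / 317ms / ... / 3.1s retry intervals in the log.
    func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
        start := time.Now()
        backoff := 300 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
            time.Sleep(wait)
            if backoff < 4*time.Second {
                backoff += backoff / 2 // grow roughly 1.5x per attempt
            }
        }
        return "", fmt.Errorf("gave up after %v: %w", deadline, errNoIP)
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 { // simulate the first lease/ARP lookups finding nothing
                return "", errNoIP
            }
            return "192.168.39.78", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }
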
	I1014 20:17:33.099654  421402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:17:33.099715  421402 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:17:33.099733  421402 cache.go:58] Caching tarball of preloaded images
	I1014 20:17:33.099879  421402 preload.go:233] Found /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:17:33.099896  421402 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:17:33.100050  421402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json ...
	I1014 20:17:33.100079  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json: {Name:mk18ebb7d610401402586eb4b220796b84614a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:33.100282  421402 start.go:360] acquireMachinesLock for bridge-880673: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
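
acquireMachinesLock is what serializes VM provisioning across the parallel test profiles; the {Delay:500ms Timeout:13m0s} spec reads as "poll for the lock every 500ms, give up after 13 minutes" (bridge-880673 ends up waiting 18.5s for it further down). A minimal sketch of such a named lock built on an exclusive lock file; the helper below is illustrative only, and minikube's actual lock package differs in detail:

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"
        "time"
    )

    // acquire polls for an exclusive lock file every delay until timeout,
    // mirroring the {Delay:500ms Timeout:13m0s} spec in the log. It returns
    // a release func that removes the lock file.
    func acquire(name string, delay, timeout time.Duration) (func(), error) {
        path := filepath.Join(os.TempDir(), name+".lock")
        deadline := time.Now().Add(timeout)
        for {
            // O_CREATE|O_EXCL fails while another process holds the file.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %q after %v", name, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("acquireMachinesLock-demo", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            log.Fatal(err)
        }
        defer release()
        fmt.Println("lock held; machine provisioning would run here")
    }
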
	I1014 20:17:38.890389  418230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:17:38.890506  418230 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:17:38.890678  418230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:17:38.890809  418230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:17:38.890950  418230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:17:38.891038  418230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:17:38.961932  418230 out.go:252]   - Generating certificates and keys ...
	I1014 20:17:38.962078  418230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:17:38.962166  418230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:17:38.962264  418230 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:17:38.962352  418230 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:17:38.962421  418230 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:17:38.962485  418230 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:17:38.962584  418230 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:17:38.962826  418230 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-880673 localhost] and IPs [192.168.72.117 127.0.0.1 ::1]
	I1014 20:17:38.962920  418230 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:17:38.963114  418230 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-880673 localhost] and IPs [192.168.72.117 127.0.0.1 ::1]
	I1014 20:17:38.963198  418230 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:17:38.963305  418230 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:17:38.963381  418230 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:17:38.963497  418230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:17:38.963575  418230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:17:38.963661  418230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:17:38.963737  418230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:17:38.963821  418230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:17:38.963928  418230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:17:38.964059  418230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:17:38.964171  418230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:17:39.026775  418230 out.go:252]   - Booting up control plane ...
	I1014 20:17:39.026925  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:17:39.027023  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:17:39.027117  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:17:39.027269  418230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:17:39.027422  418230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:17:39.027596  418230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:17:39.027733  418230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:17:39.027789  418230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:17:39.028004  418230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:17:39.028177  418230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:17:39.028268  418230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003674924s
	I1014 20:17:39.028418  418230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:17:39.028554  418230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.117:8443/livez
	I1014 20:17:39.028710  418230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:17:39.028823  418230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:17:39.028936  418230 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.008406755s
	I1014 20:17:39.029059  418230 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.666287067s
	I1014 20:17:39.029163  418230 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502243083s
	I1014 20:17:39.029294  418230 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:17:39.029471  418230 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:17:39.029572  418230 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:17:39.029854  418230 kubeadm.go:318] [mark-control-plane] Marking the node enable-default-cni-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:17:39.029946  418230 kubeadm.go:318] [bootstrap-token] Using token: 1mj9ds.b0l9y0w9wlsd6ew0
	I1014 20:17:39.097141  418230 out.go:252]   - Configuring RBAC rules ...
	I1014 20:17:39.097343  418230 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:17:39.097512  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:17:39.097729  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:17:39.097942  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:17:39.098097  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:17:39.098231  418230 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:17:39.098419  418230 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:17:39.098476  418230 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:17:39.098538  418230 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:17:39.098552  418230 kubeadm.go:318] 
	I1014 20:17:39.098668  418230 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:17:39.098688  418230 kubeadm.go:318] 
	I1014 20:17:39.098802  418230 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:17:39.098811  418230 kubeadm.go:318] 
	I1014 20:17:39.098840  418230 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:17:39.098905  418230 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:17:39.098975  418230 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:17:39.098984  418230 kubeadm.go:318] 
	I1014 20:17:39.099058  418230 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:17:39.099067  418230 kubeadm.go:318] 
	I1014 20:17:39.099131  418230 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:17:39.099140  418230 kubeadm.go:318] 
	I1014 20:17:39.099222  418230 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:17:39.099357  418230 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:17:39.099447  418230 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:17:39.099455  418230 kubeadm.go:318] 
	I1014 20:17:39.099561  418230 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:17:39.099699  418230 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:17:39.099718  418230 kubeadm.go:318] 
	I1014 20:17:39.099838  418230 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1mj9ds.b0l9y0w9wlsd6ew0 \
	I1014 20:17:39.099991  418230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:17:39.100028  418230 kubeadm.go:318] 	--control-plane 
	I1014 20:17:39.100033  418230 kubeadm.go:318] 
	I1014 20:17:39.100147  418230 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:17:39.100162  418230 kubeadm.go:318] 
	I1014 20:17:39.100280  418230 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1mj9ds.b0l9y0w9wlsd6ew0 \
	I1014 20:17:39.100457  418230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
	I1014 20:17:39.100474  418230 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:17:39.118443  418230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 20:17:35.456013  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:35.456740  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:35.456768  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:35.457270  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:35.457334  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:35.457235  421144 retry.go:31] will retry after 991.153351ms: waiting for domain to come up
	I1014 20:17:36.450676  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:36.451355  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:36.451390  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:36.451760  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:36.451811  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:36.451748  421144 retry.go:31] will retry after 1.136068871s: waiting for domain to come up
	I1014 20:17:37.589863  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:37.590717  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:37.590749  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:37.591025  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:37.591091  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:37.591024  421144 retry.go:31] will retry after 1.34377164s: waiting for domain to come up
	I1014 20:17:38.936574  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:38.937271  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:38.937297  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:38.937637  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:38.937678  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:38.937613  421144 retry.go:31] will retry after 1.860669329s: waiting for domain to come up
	I1014 20:17:39.160343  418230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 20:17:39.176721  418230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
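
The 496-byte /etc/cni/net.d/1-k8s.conflist is generated in memory and copied over SSH ("scp memory --> ..."). A sketch of the equivalent local step; the JSON below is a representative bridge-plugin chain in the standard CNI conflist format, not a byte-for-byte copy of the file minikube ships:

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge conflist: the bridge plugin with host-local IPAM,
    // chained with portmap. The subnet and option values here are illustrative.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // Write the config where the CRI runtime (cri-o here) looks for CNI networks.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
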
	I1014 20:17:39.203607  418230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:17:39.203696  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:39.203714  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-880673 minikube.k8s.io/updated_at=2025_10_14T20_17_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=enable-default-cni-880673 minikube.k8s.io/primary=true
	I1014 20:17:39.440085  418230 ops.go:34] apiserver oom_adj: -16
	I1014 20:17:39.440263  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:39.940448  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:40.440720  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:40.940513  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:41.440788  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:41.940939  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:42.441010  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:42.940536  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:43.029941  418230 kubeadm.go:1113] duration metric: took 3.826330212s to wait for elevateKubeSystemPrivileges
	I1014 20:17:43.029988  418230 kubeadm.go:402] duration metric: took 17.898921947s to StartCluster
	I1014 20:17:43.030016  418230 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:43.030113  418230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:17:43.031904  418230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:43.032222  418230 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.117 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:17:43.032269  418230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:17:43.032292  418230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:17:43.032407  418230 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-880673"
	I1014 20:17:43.032422  418230 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-880673"
	I1014 20:17:43.032463  418230 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-880673"
	I1014 20:17:43.032473  418230 host.go:66] Checking if "enable-default-cni-880673" exists ...
	I1014 20:17:43.032485  418230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-880673"
	I1014 20:17:43.032490  418230 config.go:182] Loaded profile config "enable-default-cni-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:43.032996  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.033039  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.033051  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.033089  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.037501  418230 out.go:179] * Verifying Kubernetes components...
	I1014 20:17:43.038992  418230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:43.052189  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I1014 20:17:43.052218  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I1014 20:17:43.052848  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.052899  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.053421  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.053449  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.053693  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.053718  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.053804  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.054063  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.054246  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.054403  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.054451  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.059677  418230 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-880673"
	I1014 20:17:43.059726  418230 host.go:66] Checking if "enable-default-cni-880673" exists ...
	I1014 20:17:43.060091  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.060143  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.074894  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I1014 20:17:43.075565  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.076179  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.076207  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.076773  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.077114  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.078105  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I1014 20:17:43.078709  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.079277  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.079301  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.079729  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.080382  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.080438  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.080445  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .DriverName
	I1014 20:17:43.082407  418230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:17:43.083573  418230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:17:43.083596  418230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:17:43.083626  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHHostname
	I1014 20:17:43.088290  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.089020  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:bd:aa", ip: ""} in network mk-enable-default-cni-880673: {Iface:virbr4 ExpiryTime:2025-10-14 21:17:13 +0000 UTC Type:0 Mac:52:54:00:e0:bd:aa Iaid: IPaddr:192.168.72.117 Prefix:24 Hostname:enable-default-cni-880673 Clientid:01:52:54:00:e0:bd:aa}
	I1014 20:17:43.089061  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined IP address 192.168.72.117 and MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.089382  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHPort
	I1014 20:17:43.089653  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHKeyPath
	I1014 20:17:43.089859  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHUsername
	I1014 20:17:43.090060  418230 sshutil.go:53] new ssh client: &{IP:192.168.72.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/enable-default-cni-880673/id_rsa Username:docker}
	I1014 20:17:43.101177  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I1014 20:17:43.101898  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.102698  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.102744  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.103225  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.103560  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.106136  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .DriverName
	I1014 20:17:43.106436  418230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:17:43.106456  418230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:17:43.106479  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHHostname
	I1014 20:17:43.111214  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.111979  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:bd:aa", ip: ""} in network mk-enable-default-cni-880673: {Iface:virbr4 ExpiryTime:2025-10-14 21:17:13 +0000 UTC Type:0 Mac:52:54:00:e0:bd:aa Iaid: IPaddr:192.168.72.117 Prefix:24 Hostname:enable-default-cni-880673 Clientid:01:52:54:00:e0:bd:aa}
	I1014 20:17:43.112006  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined IP address 192.168.72.117 and MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.112371  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHPort
	I1014 20:17:43.112678  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHKeyPath
	I1014 20:17:43.112888  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHUsername
	I1014 20:17:43.113065  418230 sshutil.go:53] new ssh client: &{IP:192.168.72.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/enable-default-cni-880673/id_rsa Username:docker}
	I1014 20:17:43.261499  418230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
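
The pipeline above rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts block before the "forward . /etc/resolv.conf" line so pods can resolve host.minikube.internal to the host-side gateway, the second inserts a log directive before errors, and the edited YAML is fed back through kubectl replace. Reconstructed from the sed expressions themselves, the Corefile afterwards carries a fragment like this (the "..." marks the untouched plugins in between):

        log
        errors
        ...
        hosts {
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
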
	I1014 20:17:43.335764  418230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:17:43.474105  418230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:17:43.540259  418230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:17:43.930623  418230 start.go:976] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1014 20:17:43.930746  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.930783  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.931135  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.931156  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:43.931171  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.931181  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.932134  418230 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-880673" to be "Ready" ...
	I1014 20:17:43.932297  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:43.932355  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.932364  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:43.956853  418230 node_ready.go:49] node "enable-default-cni-880673" is "Ready"
	I1014 20:17:43.956891  418230 node_ready.go:38] duration metric: took 24.726793ms for node "enable-default-cni-880673" to be "Ready" ...
	I1014 20:17:43.956906  418230 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:17:43.957005  418230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:17:43.967856  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.967884  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.968219  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:43.968265  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.968273  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.439687  418230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-880673" context rescaled to 1 replicas
	I1014 20:17:44.514543  418230 api_server.go:72] duration metric: took 1.482276829s to wait for apiserver process to appear ...
	I1014 20:17:44.514577  418230 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:17:44.514600  418230 api_server.go:253] Checking apiserver healthz at https://192.168.72.117:8443/healthz ...
	I1014 20:17:44.515252  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:44.515327  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:44.515655  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:44.515673  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.515682  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:44.515691  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:44.516518  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:44.516540  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.516569  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:44.519735  418230 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 20:17:40.799883  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:40.800545  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:40.800574  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:40.800971  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:40.801091  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:40.800938  421144 retry.go:31] will retry after 2.523760029s: waiting for domain to come up
	I1014 20:17:43.328085  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:43.328978  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:43.329008  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:43.329553  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:43.329587  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:43.329517  421144 retry.go:31] will retry after 3.135854458s: waiting for domain to come up
	I1014 20:17:44.520973  418230 addons.go:514] duration metric: took 1.488668063s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 20:17:44.537377  418230 api_server.go:279] https://192.168.72.117:8443/healthz returned 200:
	ok
	I1014 20:17:44.538778  418230 api_server.go:141] control plane version: v1.34.1
	I1014 20:17:44.538817  418230 api_server.go:131] duration metric: took 24.228488ms to wait for apiserver health ...
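
The healthz gate above is nothing more than an HTTPS GET against the apiserver expecting a 200 with body "ok" (visible a few lines up). A minimal sketch of that probe, assuming certificate verification is skipped the way a bootstrap-time check can afford to; this is not minikube's actual client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves a self-signed cert at this point, so the probe
        // skips verification (illustrative; a real client would pin the CA).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.72.117:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.72.117:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }
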
	I1014 20:17:44.538829  418230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:17:44.548431  418230 system_pods.go:59] 8 kube-system pods found
	I1014 20:17:44.548467  418230 system_pods.go:61] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.548480  418230 system_pods.go:61] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.548491  418230 system_pods.go:61] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.548499  418230 system_pods.go:61] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.548509  418230 system_pods.go:61] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.548518  418230 system_pods.go:61] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.548546  418230 system_pods.go:61] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.548554  418230 system_pods.go:61] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.548564  418230 system_pods.go:74] duration metric: took 9.726813ms to wait for pod list to return data ...
	I1014 20:17:44.548575  418230 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:17:44.556944  418230 default_sa.go:45] found service account: "default"
	I1014 20:17:44.556977  418230 default_sa.go:55] duration metric: took 8.393024ms for default service account to be created ...
	I1014 20:17:44.556993  418230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:17:44.563518  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:44.563560  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.563570  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.563577  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.563595  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.563615  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.563626  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.563639  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.563660  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.563710  418230 retry.go:31] will retry after 246.086816ms: missing components: kube-dns, kube-proxy
	I1014 20:17:44.817361  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:44.817404  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.817418  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.817431  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.817444  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.817454  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.817466  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.817475  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.817486  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.817513  418230 retry.go:31] will retry after 303.170286ms: missing components: kube-dns, kube-proxy
	I1014 20:17:45.127070  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:45.127115  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.127126  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.127135  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:45.127145  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:45.127155  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:45.127165  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:45.127176  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:45.127184  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:45.127206  418230 retry.go:31] will retry after 461.46354ms: missing components: kube-dns, kube-proxy
	I1014 20:17:45.594052  418230 system_pods.go:86] 7 kube-system pods found
	I1014 20:17:45.594089  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.594100  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:45.594111  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:45.594120  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:45.594127  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Running
	I1014 20:17:45.594134  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:45.594138  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Running
	I1014 20:17:45.594149  418230 system_pods.go:126] duration metric: took 1.037148596s to wait for k8s-apps to be running ...
	I1014 20:17:45.594158  418230 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:17:45.594210  418230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:17:45.614925  418230 system_svc.go:56] duration metric: took 20.752932ms WaitForService to wait for kubelet
	I1014 20:17:45.614960  418230 kubeadm.go:586] duration metric: took 2.582700783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:17:45.614986  418230 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:17:45.618645  418230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:17:45.618682  418230 node_conditions.go:123] node cpu capacity is 2
	I1014 20:17:45.618697  418230 node_conditions.go:105] duration metric: took 3.70399ms to run NodePressure ...
	I1014 20:17:45.618713  418230 start.go:241] waiting for startup goroutines ...
	I1014 20:17:45.618723  418230 start.go:246] waiting for cluster config update ...
	I1014 20:17:45.618738  418230 start.go:255] writing updated cluster config ...
	I1014 20:17:45.619091  418230 ssh_runner.go:195] Run: rm -f paused
	I1014 20:17:45.624700  418230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:17:45.629756  418230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-489jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:17:46.466713  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:46.467493  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:46.467523  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:46.467849  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:46.467923  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:46.467854  421144 retry.go:31] will retry after 3.337883952s: waiting for domain to come up
	I1014 20:17:49.808402  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:49.809278  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has current primary IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:49.809328  421087 main.go:141] libmachine: (flannel-880673) found domain IP: 192.168.39.78
	I1014 20:17:49.809341  421087 main.go:141] libmachine: (flannel-880673) reserving static IP address...
	I1014 20:17:49.809868  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find host DHCP lease matching {name: "flannel-880673", mac: "52:54:00:d6:0d:31", ip: "192.168.39.78"} in network mk-flannel-880673
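
The lease lookup above comes up empty until the guest's DHCP client has run, so the driver retries with a randomized, growing delay; the 3.337883952s figure is one such draw. A small sketch of that retry shape, with a hypothetical callback and bounds:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// retryWithBackoff calls fn until it succeeds or attempts run out,
// sleeping a jittered, growing interval between tries.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base *= 2
	}
	return err
}

func main() {
	tries := 0
	_ = retryWithBackoff(func() error {
		tries++
		if tries < 3 {
			return errNoLease // stand-in for "no network interface addresses found"
		}
		return nil
	}, 5, time.Second)
}
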
	I1014 20:17:51.614250  421402 start.go:364] duration metric: took 18.513922492s to acquireMachinesLock for "bridge-880673"
	I1014 20:17:51.614349  421402 start.go:93] Provisioning new machine with config: &{Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:17:51.614493  421402 start.go:125] createHost starting for "" (driver="kvm2")
	W1014 20:17:47.636567  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:49.643515  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:51.617418  421402 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 20:17:51.617665  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:51.617720  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:51.636823  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1014 20:17:51.637339  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:51.637929  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:17:51.637957  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:51.638388  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:51.638627  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:17:51.638800  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:17:51.638988  421402 start.go:159] libmachine.API.Create for "bridge-880673" (driver="kvm2")
	I1014 20:17:51.639020  421402 client.go:168] LocalClient.Create starting
	I1014 20:17:51.639055  421402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 20:17:51.639092  421402 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:51.639111  421402 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:51.639181  421402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 20:17:51.639218  421402 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:51.639252  421402 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:51.639278  421402 main.go:141] libmachine: Running pre-create checks...
	I1014 20:17:51.639290  421402 main.go:141] libmachine: (bridge-880673) Calling .PreCreateCheck
	I1014 20:17:51.639675  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:17:51.640139  421402 main.go:141] libmachine: Creating machine...
	I1014 20:17:51.640157  421402 main.go:141] libmachine: (bridge-880673) Calling .Create
	I1014 20:17:51.640289  421402 main.go:141] libmachine: (bridge-880673) creating domain...
	I1014 20:17:51.640351  421402 main.go:141] libmachine: (bridge-880673) creating network...
	I1014 20:17:51.641677  421402 main.go:141] libmachine: (bridge-880673) DBG | found existing default network
	I1014 20:17:51.641912  421402 main.go:141] libmachine: (bridge-880673) DBG | <network connections='3'>
	I1014 20:17:51.641935  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>default</name>
	I1014 20:17:51.641947  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 20:17:51.641971  421402 main.go:141] libmachine: (bridge-880673) DBG |   <forward mode='nat'>
	I1014 20:17:51.641984  421402 main.go:141] libmachine: (bridge-880673) DBG |     <nat>
	I1014 20:17:51.641993  421402 main.go:141] libmachine: (bridge-880673) DBG |       <port start='1024' end='65535'/>
	I1014 20:17:51.642005  421402 main.go:141] libmachine: (bridge-880673) DBG |     </nat>
	I1014 20:17:51.642012  421402 main.go:141] libmachine: (bridge-880673) DBG |   </forward>
	I1014 20:17:51.642025  421402 main.go:141] libmachine: (bridge-880673) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 20:17:51.642037  421402 main.go:141] libmachine: (bridge-880673) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 20:17:51.642047  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 20:17:51.642054  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.642086  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 20:17:51.642109  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.642131  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.642138  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.642149  421402 main.go:141] libmachine: (bridge-880673) DBG | 
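
The <network> dump above is libvirt's own description of the host's default NAT network, which the driver inspects before carving out a private one. A minimal sketch of pulling the bridge name and DHCP range out of such XML with encoding/xml; the struct shapes are illustrative and cover only the fields shown:

package main

import (
	"encoding/xml"
	"fmt"
)

type network struct {
	Name   string `xml:"name"`
	Bridge struct {
		Name string `xml:"name,attr"`
	} `xml:"bridge"`
	IP struct {
		Address string `xml:"address,attr"`
		Netmask string `xml:"netmask,attr"`
		Range   struct {
			Start string `xml:"start,attr"`
			End   string `xml:"end,attr"`
		} `xml:"dhcp>range"`
	} `xml:"ip"`
}

func main() {
	doc := `<network><name>default</name>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp><range start='192.168.122.2' end='192.168.122.254'/></dhcp>
	  </ip></network>`
	var n network
	if err := xml.Unmarshal([]byte(doc), &n); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> bridge %s, DHCP %s-%s\n",
		n.Name, n.Bridge.Name, n.IP.Range.Start, n.IP.Range.End)
}
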
	I1014 20:17:51.643014  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.642867  422003 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:dc:bd} reservation:<nil>}
	I1014 20:17:51.643548  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.643456  422003 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:05:7c:de} reservation:<nil>}
	I1014 20:17:51.644332  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.644240  422003 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025eb90}
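
Subnet selection walks candidate private /24s (192.168.39.0, 192.168.50.0, ...) and skips any that a host interface already occupies, as the two "skipping subnet" lines show. A rough sketch of that availability check, assuming the same candidate list:

package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipnet.Contains(ipn.IP) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for _, cidr := range []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"} {
		taken, err := subnetTaken(cidr)
		if err != nil {
			panic(err)
		}
		if !taken {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet that is taken:", cidr)
	}
}
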
	I1014 20:17:51.644380  421402 main.go:141] libmachine: (bridge-880673) DBG | defining private network:
	I1014 20:17:51.644402  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.644416  421402 main.go:141] libmachine: (bridge-880673) DBG | <network>
	I1014 20:17:51.644428  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>mk-bridge-880673</name>
	I1014 20:17:51.644441  421402 main.go:141] libmachine: (bridge-880673) DBG |   <dns enable='no'/>
	I1014 20:17:51.644456  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:17:51.644468  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.644478  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:17:51.644502  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.644524  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.644536  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.644546  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.650988  421402 main.go:141] libmachine: (bridge-880673) DBG | creating private network mk-bridge-880673 192.168.61.0/24...
	I1014 20:17:51.727429  421402 main.go:141] libmachine: (bridge-880673) DBG | private network mk-bridge-880673 192.168.61.0/24 created
	I1014 20:17:51.727727  421402 main.go:141] libmachine: (bridge-880673) DBG | <network>
	I1014 20:17:51.727743  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>mk-bridge-880673</name>
	I1014 20:17:51.727754  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>ecd63ac0-f4e0-4f34-a66c-58986d00c010</uuid>
	I1014 20:17:51.727765  421402 main.go:141] libmachine: (bridge-880673) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1014 20:17:51.727777  421402 main.go:141] libmachine: (bridge-880673) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 ...
	I1014 20:17:51.727797  421402 main.go:141] libmachine: (bridge-880673) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 20:17:51.727810  421402 main.go:141] libmachine: (bridge-880673) DBG |   <mac address='52:54:00:71:72:11'/>
	I1014 20:17:51.727820  421402 main.go:141] libmachine: (bridge-880673) DBG |   <dns enable='no'/>
	I1014 20:17:51.727826  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:17:51.727833  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.727842  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:17:51.727889  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.727920  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.727951  421402 main.go:141] libmachine: (bridge-880673) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 20:17:51.727966  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.727988  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.728007  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.727751  422003 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:52.004395  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.004250  422003 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa...
	I1014 20:17:52.087668  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.087546  422003 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk...
	I1014 20:17:52.087697  421402 main.go:141] libmachine: (bridge-880673) DBG | Writing magic tar header
	I1014 20:17:52.087707  421402 main.go:141] libmachine: (bridge-880673) DBG | Writing SSH key tar header
	I1014 20:17:52.087803  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.087707  422003 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 ...
	I1014 20:17:52.087897  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673
	I1014 20:17:52.087924  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 (perms=drwx------)
	I1014 20:17:52.087937  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 20:17:52.087957  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:52.087971  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 20:17:52.087993  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 20:17:52.088005  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins
	I1014 20:17:52.088019  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 20:17:52.088038  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 20:17:52.088051  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 20:17:52.088064  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 20:17:52.088076  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home
	I1014 20:17:52.088086  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 20:17:52.088162  421402 main.go:141] libmachine: (bridge-880673) DBG | skipping /home - not owner
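
The permission pass above walks from the machine directory up toward /home, adding the owner search bit where the current user owns the directory and skipping where it does not (hence "skipping /home - not owner"). A simplified, Unix-only sketch of that walk; the uid comparison stands in for minikube's real ownership check:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

// fixPerms adds the owner-execute (search) bit on every ancestor of dir
// that the current user owns, stopping once stop has been checked.
func fixPerms(dir, stop string) error {
	uid := os.Getuid()
	for d := dir; ; d = filepath.Dir(d) {
		info, err := os.Stat(d)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t)
		if !ok || int(st.Uid) != uid {
			fmt.Println("skipping", d, "- not owner")
		} else if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
			return err
		}
		if d == stop || d == "/" {
			return nil
		}
	}
}

func main() {
	if err := fixPerms("/home/jenkins/minikube-integration", "/home"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
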
	I1014 20:17:52.088179  421402 main.go:141] libmachine: (bridge-880673) defining domain...
	I1014 20:17:52.089554  421402 main.go:141] libmachine: (bridge-880673) defining domain using XML: 
	I1014 20:17:52.089570  421402 main.go:141] libmachine: (bridge-880673) <domain type='kvm'>
	I1014 20:17:52.089576  421402 main.go:141] libmachine: (bridge-880673)   <name>bridge-880673</name>
	I1014 20:17:52.089580  421402 main.go:141] libmachine: (bridge-880673)   <memory unit='MiB'>3072</memory>
	I1014 20:17:52.089585  421402 main.go:141] libmachine: (bridge-880673)   <vcpu>2</vcpu>
	I1014 20:17:52.089589  421402 main.go:141] libmachine: (bridge-880673)   <features>
	I1014 20:17:52.089593  421402 main.go:141] libmachine: (bridge-880673)     <acpi/>
	I1014 20:17:52.089597  421402 main.go:141] libmachine: (bridge-880673)     <apic/>
	I1014 20:17:52.089614  421402 main.go:141] libmachine: (bridge-880673)     <pae/>
	I1014 20:17:52.089621  421402 main.go:141] libmachine: (bridge-880673)   </features>
	I1014 20:17:52.089626  421402 main.go:141] libmachine: (bridge-880673)   <cpu mode='host-passthrough'>
	I1014 20:17:52.089630  421402 main.go:141] libmachine: (bridge-880673)   </cpu>
	I1014 20:17:52.089635  421402 main.go:141] libmachine: (bridge-880673)   <os>
	I1014 20:17:52.089639  421402 main.go:141] libmachine: (bridge-880673)     <type>hvm</type>
	I1014 20:17:52.089643  421402 main.go:141] libmachine: (bridge-880673)     <boot dev='cdrom'/>
	I1014 20:17:52.089655  421402 main.go:141] libmachine: (bridge-880673)     <boot dev='hd'/>
	I1014 20:17:52.089663  421402 main.go:141] libmachine: (bridge-880673)     <bootmenu enable='no'/>
	I1014 20:17:52.089672  421402 main.go:141] libmachine: (bridge-880673)   </os>
	I1014 20:17:52.089683  421402 main.go:141] libmachine: (bridge-880673)   <devices>
	I1014 20:17:52.089690  421402 main.go:141] libmachine: (bridge-880673)     <disk type='file' device='cdrom'>
	I1014 20:17:52.089698  421402 main.go:141] libmachine: (bridge-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/boot2docker.iso'/>
	I1014 20:17:52.089705  421402 main.go:141] libmachine: (bridge-880673)       <target dev='hdc' bus='scsi'/>
	I1014 20:17:52.089710  421402 main.go:141] libmachine: (bridge-880673)       <readonly/>
	I1014 20:17:52.089713  421402 main.go:141] libmachine: (bridge-880673)     </disk>
	I1014 20:17:52.089719  421402 main.go:141] libmachine: (bridge-880673)     <disk type='file' device='disk'>
	I1014 20:17:52.089726  421402 main.go:141] libmachine: (bridge-880673)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 20:17:52.089738  421402 main.go:141] libmachine: (bridge-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk'/>
	I1014 20:17:52.089749  421402 main.go:141] libmachine: (bridge-880673)       <target dev='hda' bus='virtio'/>
	I1014 20:17:52.089758  421402 main.go:141] libmachine: (bridge-880673)     </disk>
	I1014 20:17:52.089767  421402 main.go:141] libmachine: (bridge-880673)     <interface type='network'>
	I1014 20:17:52.089774  421402 main.go:141] libmachine: (bridge-880673)       <source network='mk-bridge-880673'/>
	I1014 20:17:52.089780  421402 main.go:141] libmachine: (bridge-880673)       <model type='virtio'/>
	I1014 20:17:52.089784  421402 main.go:141] libmachine: (bridge-880673)     </interface>
	I1014 20:17:52.089791  421402 main.go:141] libmachine: (bridge-880673)     <interface type='network'>
	I1014 20:17:52.089796  421402 main.go:141] libmachine: (bridge-880673)       <source network='default'/>
	I1014 20:17:52.089804  421402 main.go:141] libmachine: (bridge-880673)       <model type='virtio'/>
	I1014 20:17:52.089812  421402 main.go:141] libmachine: (bridge-880673)     </interface>
	I1014 20:17:52.089824  421402 main.go:141] libmachine: (bridge-880673)     <serial type='pty'>
	I1014 20:17:52.089832  421402 main.go:141] libmachine: (bridge-880673)       <target port='0'/>
	I1014 20:17:52.089841  421402 main.go:141] libmachine: (bridge-880673)     </serial>
	I1014 20:17:52.089850  421402 main.go:141] libmachine: (bridge-880673)     <console type='pty'>
	I1014 20:17:52.089864  421402 main.go:141] libmachine: (bridge-880673)       <target type='serial' port='0'/>
	I1014 20:17:52.089872  421402 main.go:141] libmachine: (bridge-880673)     </console>
	I1014 20:17:52.089876  421402 main.go:141] libmachine: (bridge-880673)     <rng model='virtio'>
	I1014 20:17:52.089889  421402 main.go:141] libmachine: (bridge-880673)       <backend model='random'>/dev/random</backend>
	I1014 20:17:52.089895  421402 main.go:141] libmachine: (bridge-880673)     </rng>
	I1014 20:17:52.089912  421402 main.go:141] libmachine: (bridge-880673)   </devices>
	I1014 20:17:52.089925  421402 main.go:141] libmachine: (bridge-880673) </domain>
	I1014 20:17:52.089935  421402 main.go:141] libmachine: (bridge-880673) 
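
With the XML assembled, defining and booting the guest is a pair of libvirt calls. A minimal sketch with the libvirt.org/go/libvirt bindings; the connection URI matches the KVMQemuURI in the config above, while domainXML is a stand-in for the document just printed:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// "ensuring networks are active..." before the NICs attach
	net, err := conn.LookupNetworkByName("mk-bridge-880673")
	if err != nil {
		log.Fatal(err)
	}
	if active, _ := net.IsActive(); !active {
		if err := net.Create(); err != nil {
			log.Fatal(err)
		}
	}

	const domainXML = `<domain type='kvm'>…</domain>` // the document built above
	dom, err := conn.DomainDefineXML(domainXML)       // "defining domain..."
	if err != nil {
		log.Fatal(err)
	}
	if err := dom.Create(); err != nil { // "starting domain..."
		log.Fatal(err)
	}
}
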
	I1014 20:17:52.095220  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:35:9e:82 in network default
	I1014 20:17:52.095963  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:52.096002  421402 main.go:141] libmachine: (bridge-880673) starting domain...
	I1014 20:17:52.096015  421402 main.go:141] libmachine: (bridge-880673) ensuring networks are active...
	I1014 20:17:52.096955  421402 main.go:141] libmachine: (bridge-880673) Ensuring network default is active
	I1014 20:17:52.097463  421402 main.go:141] libmachine: (bridge-880673) Ensuring network mk-bridge-880673 is active
	I1014 20:17:52.098259  421402 main.go:141] libmachine: (bridge-880673) getting domain XML...
	I1014 20:17:52.099848  421402 main.go:141] libmachine: (bridge-880673) DBG | starting domain XML:
	I1014 20:17:52.099871  421402 main.go:141] libmachine: (bridge-880673) DBG | <domain type='kvm'>
	I1014 20:17:52.099883  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>bridge-880673</name>
	I1014 20:17:52.099893  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>b2be856d-0946-4eb5-be70-c1a4965dcc84</uuid>
	I1014 20:17:52.099906  421402 main.go:141] libmachine: (bridge-880673) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:17:52.099914  421402 main.go:141] libmachine: (bridge-880673) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:17:52.099927  421402 main.go:141] libmachine: (bridge-880673) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:17:52.099938  421402 main.go:141] libmachine: (bridge-880673) DBG |   <os>
	I1014 20:17:52.099948  421402 main.go:141] libmachine: (bridge-880673) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:17:52.099972  421402 main.go:141] libmachine: (bridge-880673) DBG |     <boot dev='cdrom'/>
	I1014 20:17:52.099983  421402 main.go:141] libmachine: (bridge-880673) DBG |     <boot dev='hd'/>
	I1014 20:17:52.099990  421402 main.go:141] libmachine: (bridge-880673) DBG |     <bootmenu enable='no'/>
	I1014 20:17:52.099999  421402 main.go:141] libmachine: (bridge-880673) DBG |   </os>
	I1014 20:17:52.100006  421402 main.go:141] libmachine: (bridge-880673) DBG |   <features>
	I1014 20:17:52.100044  421402 main.go:141] libmachine: (bridge-880673) DBG |     <acpi/>
	I1014 20:17:52.100071  421402 main.go:141] libmachine: (bridge-880673) DBG |     <apic/>
	I1014 20:17:52.100093  421402 main.go:141] libmachine: (bridge-880673) DBG |     <pae/>
	I1014 20:17:52.100120  421402 main.go:141] libmachine: (bridge-880673) DBG |   </features>
	I1014 20:17:52.100137  421402 main.go:141] libmachine: (bridge-880673) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:17:52.100152  421402 main.go:141] libmachine: (bridge-880673) DBG |   <clock offset='utc'/>
	I1014 20:17:52.100167  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:17:52.100188  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:17:52.100210  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_crash>destroy</on_crash>
	I1014 20:17:52.100226  421402 main.go:141] libmachine: (bridge-880673) DBG |   <devices>
	I1014 20:17:52.100241  421402 main.go:141] libmachine: (bridge-880673) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:17:52.100260  421402 main.go:141] libmachine: (bridge-880673) DBG |     <disk type='file' device='cdrom'>
	I1014 20:17:52.100276  421402 main.go:141] libmachine: (bridge-880673) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:17:52.100292  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/boot2docker.iso'/>
	I1014 20:17:52.100307  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:17:52.100344  421402 main.go:141] libmachine: (bridge-880673) DBG |       <readonly/>
	I1014 20:17:52.100375  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:17:52.100427  421402 main.go:141] libmachine: (bridge-880673) DBG |     </disk>
	I1014 20:17:52.100446  421402 main.go:141] libmachine: (bridge-880673) DBG |     <disk type='file' device='disk'>
	I1014 20:17:52.100454  421402 main.go:141] libmachine: (bridge-880673) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:17:52.100467  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk'/>
	I1014 20:17:52.100480  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:17:52.100496  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:17:52.100513  421402 main.go:141] libmachine: (bridge-880673) DBG |     </disk>
	I1014 20:17:52.100523  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:17:52.100534  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:17:52.100555  421402 main.go:141] libmachine: (bridge-880673) DBG |     </controller>
	I1014 20:17:52.100573  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:17:52.100587  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:17:52.100599  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:17:52.100610  421402 main.go:141] libmachine: (bridge-880673) DBG |     </controller>
	I1014 20:17:52.100618  421402 main.go:141] libmachine: (bridge-880673) DBG |     <interface type='network'>
	I1014 20:17:52.100629  421402 main.go:141] libmachine: (bridge-880673) DBG |       <mac address='52:54:00:21:00:20'/>
	I1014 20:17:52.100639  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source network='mk-bridge-880673'/>
	I1014 20:17:52.100681  421402 main.go:141] libmachine: (bridge-880673) DBG |       <model type='virtio'/>
	I1014 20:17:52.100709  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:17:52.100718  421402 main.go:141] libmachine: (bridge-880673) DBG |     </interface>
	I1014 20:17:52.100728  421402 main.go:141] libmachine: (bridge-880673) DBG |     <interface type='network'>
	I1014 20:17:52.100737  421402 main.go:141] libmachine: (bridge-880673) DBG |       <mac address='52:54:00:35:9e:82'/>
	I1014 20:17:52.100747  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source network='default'/>
	I1014 20:17:52.100755  421402 main.go:141] libmachine: (bridge-880673) DBG |       <model type='virtio'/>
	I1014 20:17:52.100768  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:17:52.100790  421402 main.go:141] libmachine: (bridge-880673) DBG |     </interface>
	I1014 20:17:52.100807  421402 main.go:141] libmachine: (bridge-880673) DBG |     <serial type='pty'>
	I1014 20:17:52.100818  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target type='isa-serial' port='0'>
	I1014 20:17:52.100828  421402 main.go:141] libmachine: (bridge-880673) DBG |         <model name='isa-serial'/>
	I1014 20:17:52.100836  421402 main.go:141] libmachine: (bridge-880673) DBG |       </target>
	I1014 20:17:52.100846  421402 main.go:141] libmachine: (bridge-880673) DBG |     </serial>
	I1014 20:17:52.100854  421402 main.go:141] libmachine: (bridge-880673) DBG |     <console type='pty'>
	I1014 20:17:52.100863  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target type='serial' port='0'/>
	I1014 20:17:52.100871  421402 main.go:141] libmachine: (bridge-880673) DBG |     </console>
	I1014 20:17:52.100885  421402 main.go:141] libmachine: (bridge-880673) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:17:52.100898  421402 main.go:141] libmachine: (bridge-880673) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:17:52.100906  421402 main.go:141] libmachine: (bridge-880673) DBG |     <audio id='1' type='none'/>
	I1014 20:17:52.100919  421402 main.go:141] libmachine: (bridge-880673) DBG |     <memballoon model='virtio'>
	I1014 20:17:52.100931  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:17:52.100946  421402 main.go:141] libmachine: (bridge-880673) DBG |     </memballoon>
	I1014 20:17:52.100960  421402 main.go:141] libmachine: (bridge-880673) DBG |     <rng model='virtio'>
	I1014 20:17:52.100991  421402 main.go:141] libmachine: (bridge-880673) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:17:52.101015  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:17:52.101027  421402 main.go:141] libmachine: (bridge-880673) DBG |     </rng>
	I1014 20:17:52.101036  421402 main.go:141] libmachine: (bridge-880673) DBG |   </devices>
	I1014 20:17:52.101045  421402 main.go:141] libmachine: (bridge-880673) DBG | </domain>
	I1014 20:17:52.101054  421402 main.go:141] libmachine: (bridge-880673) DBG | 
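
The second dump is libvirt's round-tripped copy of that definition: a UUID, the emulator path, and stable PCI addresses have been filled in. Reading it back is one call, sketched here standalone with the same bindings (domain name taken from the log):

package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dom, err := conn.LookupDomainByName("bridge-880673")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	xmldesc, err := dom.GetXMLDesc(0) // the augmented XML shown above
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(xmldesc)
}
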
	I1014 20:17:50.101459  421087 main.go:141] libmachine: (flannel-880673) reserved static IP address 192.168.39.78 for domain flannel-880673
	I1014 20:17:50.101509  421087 main.go:141] libmachine: (flannel-880673) DBG | Getting to WaitForSSH function...
	I1014 20:17:50.101518  421087 main.go:141] libmachine: (flannel-880673) waiting for SSH...
	I1014 20:17:50.105228  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.105867  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.105896  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.106044  421087 main.go:141] libmachine: (flannel-880673) DBG | Using SSH client type: external
	I1014 20:17:50.106075  421087 main.go:141] libmachine: (flannel-880673) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa (-rw-------)
	I1014 20:17:50.106104  421087 main.go:141] libmachine: (flannel-880673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:17:50.106118  421087 main.go:141] libmachine: (flannel-880673) DBG | About to run SSH command:
	I1014 20:17:50.106130  421087 main.go:141] libmachine: (flannel-880673) DBG | exit 0
	I1014 20:17:50.238769  421087 main.go:141] libmachine: (flannel-880673) DBG | SSH cmd err, output: <nil>: 
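
WaitForSSH shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the connection succeeds. A compact sketch of that probe; host and key path reuse the values from the log, and the attempt budget is arbitrary:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` completes successfully.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa"
	for i := 0; i < 30; i++ {
		if sshReady("192.168.39.78", key) {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
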
	I1014 20:17:50.239127  421087 main.go:141] libmachine: (flannel-880673) domain creation complete
	I1014 20:17:50.239637  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:50.240432  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:50.240681  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:50.240878  421087 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 20:17:50.240893  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:17:50.242891  421087 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 20:17:50.242908  421087 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 20:17:50.242918  421087 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 20:17:50.242927  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.246273  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.246749  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.246772  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.246939  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.247138  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.247284  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.247443  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.247618  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.247940  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.247959  421087 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 20:17:50.357163  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:17:50.357194  421087 main.go:141] libmachine: Detecting the provisioner...
	I1014 20:17:50.357204  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.360955  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.361450  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.361521  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.361680  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.361903  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.362061  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.362240  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.362533  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.362848  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.362864  421087 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 20:17:50.470979  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1014 20:17:50.471037  421087 main.go:141] libmachine: found compatible host: buildroot
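
Provisioner detection reduces to parsing the key=value pairs of /etc/os-release and matching ID against known distributions. A small sketch of that parse, with the input shortened to the fields shown above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease turns os-release KEY=value lines into a map, dropping quotes.
func parseOSRelease(s string) map[string]string {
	out := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(s))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out
}

func main() {
	release := "NAME=Buildroot\nVERSION=2025.02-dirty\nID=buildroot\nVERSION_ID=2025.02\nPRETTY_NAME=\"Buildroot 2025.02\"\n"
	if parseOSRelease(release)["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
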
	I1014 20:17:50.471044  421087 main.go:141] libmachine: Provisioning with buildroot...
	I1014 20:17:50.471052  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.471338  421087 buildroot.go:166] provisioning hostname "flannel-880673"
	I1014 20:17:50.471379  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.471617  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.474844  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.475290  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.475334  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.475473  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.475684  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.475858  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.476027  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.476233  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.476512  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.476530  421087 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-880673 && echo "flannel-880673" | sudo tee /etc/hostname
	I1014 20:17:50.597679  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-880673
	
	I1014 20:17:50.597732  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.601501  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.601966  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.601994  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.602373  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.602616  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.602849  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.603026  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.603233  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.603517  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.603552  421087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-880673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-880673/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-880673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:17:50.720937  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:17:50.720967  421087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:17:50.721002  421087 buildroot.go:174] setting up certificates
	I1014 20:17:50.721015  421087 provision.go:84] configureAuth start
	I1014 20:17:50.721029  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.721462  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:50.724906  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.725295  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.725354  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.725547  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.728177  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.728630  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.728660  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.728837  421087 provision.go:143] copyHostCerts
	I1014 20:17:50.728911  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:17:50.728932  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:17:50.729026  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:17:50.729171  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:17:50.729183  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:17:50.729225  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:17:50.729343  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:17:50.729357  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:17:50.729409  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:17:50.729511  421087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.flannel-880673 san=[127.0.0.1 192.168.39.78 flannel-880673 localhost minikube]
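
The server certificate is minted from the local CA with a SAN set covering loopback, the machine IP, and every name minikube may dial. A condensed crypto/x509 sketch of issuing such a cert; the CA is generated inline for brevity, where real code would load certs/ca.pem and ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// stand-in CA; key-generation errors elided for brevity
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-880673"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
		DNSNames:    []string{"flannel-880673", "localhost", "minikube"},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server cert issued with SANs", srvTmpl.DNSNames, srvTmpl.IPAddresses)
}
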
	I1014 20:17:50.937434  421087 provision.go:177] copyRemoteCerts
	I1014 20:17:50.937529  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:17:50.937567  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.940661  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.941077  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.941106  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.941293  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.941546  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.941735  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.941947  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.027245  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:17:51.057431  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1014 20:17:51.087620  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:17:51.117032  421087 provision.go:87] duration metric: took 395.999388ms to configureAuth
	I1014 20:17:51.117078  421087 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:17:51.117230  421087 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:51.117349  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.120410  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.120743  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.120768  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.120992  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.121252  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.121462  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.121640  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.121892  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:51.122177  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:51.122203  421087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:17:51.359141  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:17:51.359176  421087 main.go:141] libmachine: Checking connection to Docker...
	I1014 20:17:51.359188  421087 main.go:141] libmachine: (flannel-880673) Calling .GetURL
	I1014 20:17:51.360877  421087 main.go:141] libmachine: (flannel-880673) DBG | using libvirt version 8000000
	I1014 20:17:51.363941  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.364391  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.364421  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.364677  421087 main.go:141] libmachine: Docker is up and running!
	I1014 20:17:51.364693  421087 main.go:141] libmachine: Reticulating splines...
	I1014 20:17:51.364702  421087 client.go:171] duration metric: took 21.333870837s to LocalClient.Create
	I1014 20:17:51.364755  421087 start.go:167] duration metric: took 21.333952273s to libmachine.API.Create "flannel-880673"
	I1014 20:17:51.364772  421087 start.go:293] postStartSetup for "flannel-880673" (driver="kvm2")
	I1014 20:17:51.364785  421087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:17:51.364811  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.365093  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:17:51.365122  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.368038  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.368451  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.368482  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.368691  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.368870  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.369049  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.369172  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.454055  421087 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:17:51.459382  421087 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:17:51.459411  421087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:17:51.459480  421087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:17:51.459555  421087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:17:51.459644  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:17:51.471818  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:17:51.500829  421087 start.go:296] duration metric: took 136.037282ms for postStartSetup
	I1014 20:17:51.500899  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:51.501695  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:51.504654  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.505104  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.505134  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.505480  421087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/config.json ...
	I1014 20:17:51.505690  421087 start.go:128] duration metric: took 21.496576305s to createHost
	I1014 20:17:51.505714  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.508879  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.509305  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.509350  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.509541  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.509750  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.510035  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.510221  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.510420  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:51.510686  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:51.510702  421087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:17:51.614004  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760473071.583554781
	
	I1014 20:17:51.614069  421087 fix.go:216] guest clock: 1760473071.583554781
	I1014 20:17:51.614085  421087 fix.go:229] Guest: 2025-10-14 20:17:51.583554781 +0000 UTC Remote: 2025-10-14 20:17:51.50570252 +0000 UTC m=+21.644925851 (delta=77.852261ms)
	I1014 20:17:51.614130  421087 fix.go:200] guest clock delta is within tolerance: 77.852261ms
	I1014 20:17:51.614141  421087 start.go:83] releasing machines lock for "flannel-880673", held for 21.605088741s
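
The guest-clock check above runs `date +%s.%N` on the VM and compares the result against the host clock, accepting the start when the delta stays inside a tolerance. A minimal sketch of that comparison; parseGuestClock is a hypothetical helper and the 2s tolerance is an assumed value for illustration, not minikube's setting:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output, e.g. "1760473071.583554781",
	// into a time.Time (assumes the full 9-digit nanosecond field %N prints).
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1760473071.583554781")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		// 2s is an assumed tolerance, not minikube's value.
		if math.Abs(delta.Seconds()) < 2.0 {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}
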
	I1014 20:17:51.614185  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.614505  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:51.617635  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.618186  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.618235  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.618444  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619001  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619200  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619355  421087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:17:51.619418  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.619424  421087 ssh_runner.go:195] Run: cat /version.json
	I1014 20:17:51.619439  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.622937  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.622971  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623493  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.623550  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.623586  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623603  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623798  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.623910  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.624021  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.624023  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.624244  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.624328  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.624403  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.624522  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.706447  421087 ssh_runner.go:195] Run: systemctl --version
	I1014 20:17:51.742651  421087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:17:51.906255  421087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:17:51.913515  421087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:17:51.913622  421087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:17:51.933476  421087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:17:51.933501  421087 start.go:495] detecting cgroup driver to use...
	I1014 20:17:51.933557  421087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:17:51.953531  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:17:51.971263  421087 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:17:51.971349  421087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:17:51.989240  421087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:17:52.006381  421087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:17:52.164206  421087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:17:52.395429  421087 docker.go:234] disabling docker service ...
	I1014 20:17:52.395502  421087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:17:52.418523  421087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:17:52.434793  421087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:17:52.591989  421087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:17:52.743031  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:17:52.761625  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:17:52.788904  421087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:17:52.788959  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.803433  421087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:17:52.803500  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.819575  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.834511  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.848951  421087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:17:52.862556  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.874877  421087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.895550  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
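
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed: it forces pause_image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to cgroupfs, resets conmon_cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. The same whole-line rewrite, sketched in Go (setTOMLKey is a hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setTOMLKey rewrites `key = ...` lines the way the sed commands above do,
	// e.g. forcing pause_image or cgroup_manager in 02-crio.conf.
	func setTOMLKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
		fmt.Print(conf)
	}
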
	I1014 20:17:52.909411  421087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:17:52.920167  421087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:17:52.920235  421087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:17:52.940776  421087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
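
The three commands above form a probe-then-fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails while br_netfilter is unloaded, so the module is loaded and IPv4 forwarding is enabled for pod traffic. A minimal sketch of the same sequence, run locally with os/exec (minikube performs it over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Probe: succeeds only once br_netfilter is loaded.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("netfilter sysctl missing, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		// Enable forwarding for pod traffic, as in the log above.
		if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			panic(err)
		}
	}
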
	I1014 20:17:52.956779  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:53.103029  421087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:17:53.226612  421087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:17:53.226724  421087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:17:53.234132  421087 start.go:563] Will wait 60s for crictl version
	I1014 20:17:53.234203  421087 ssh_runner.go:195] Run: which crictl
	I1014 20:17:53.239069  421087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:17:53.287126  421087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:17:53.287244  421087 ssh_runner.go:195] Run: crio --version
	I1014 20:17:53.327479  421087 ssh_runner.go:195] Run: crio --version
	I1014 20:17:53.363073  421087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1014 20:17:53.364248  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:53.368097  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:53.368664  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:53.368691  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:53.369001  421087 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 20:17:53.373983  421087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:17:53.389567  421087 kubeadm.go:883] updating cluster {Name:flannel-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:17:53.389708  421087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:17:53.389768  421087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:17:53.430110  421087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 20:17:53.430215  421087 ssh_runner.go:195] Run: which lz4
	I1014 20:17:53.436993  421087 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 20:17:53.442434  421087 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 20:17:53.442474  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	W1014 20:17:52.137421  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:54.137620  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:56.138484  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:53.540027  421402 main.go:141] libmachine: (bridge-880673) waiting for domain to start...
	I1014 20:17:53.541811  421402 main.go:141] libmachine: (bridge-880673) domain is now running
	I1014 20:17:53.541838  421402 main.go:141] libmachine: (bridge-880673) waiting for IP...
	I1014 20:17:53.542767  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:53.543379  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:53.543407  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:53.543822  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:53.543882  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:53.543835  422003 retry.go:31] will retry after 294.647054ms: waiting for domain to come up
	I1014 20:17:53.840886  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:53.841778  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:53.841809  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:53.842292  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:53.842378  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:53.842274  422003 retry.go:31] will retry after 306.249634ms: waiting for domain to come up
	I1014 20:17:54.151233  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:54.152165  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:54.152200  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:54.152799  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:54.152831  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:54.152769  422003 retry.go:31] will retry after 428.212526ms: waiting for domain to come up
	I1014 20:17:54.582621  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:54.583447  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:54.583472  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:54.584500  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:54.584527  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:54.584013  422003 retry.go:31] will retry after 599.389005ms: waiting for domain to come up
	I1014 20:17:55.184701  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:55.185409  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:55.185439  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:55.185832  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:55.185881  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:55.185838  422003 retry.go:31] will retry after 651.000197ms: waiting for domain to come up
	I1014 20:17:55.838912  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:55.839716  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:55.839748  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:55.840211  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:55.840245  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:55.840177  422003 retry.go:31] will retry after 630.744356ms: waiting for domain to come up
	I1014 20:17:56.473326  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:56.474156  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:56.474185  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:56.474592  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:56.474662  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:56.474596  422003 retry.go:31] will retry after 941.351033ms: waiting for domain to come up
	I1014 20:17:57.417345  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:57.417934  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:57.417959  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:57.418386  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:57.418446  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:57.418390  422003 retry.go:31] will retry after 1.156861705s: waiting for domain to come up
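
The bridge-880673 lines above show libmachine polling libvirt for the new domain's address: the DHCP lease table first, then ARP, with a growing, jittered delay between attempts ("will retry after ..."). The shape of that loop as a sketch; lookupIP is a stand-in for the lease/ARP query, not minikube's code:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the lease-then-ARP query in the log; here it
	// simply fails for a few attempts to simulate a slow DHCP lease.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("no network interface addresses found")
		}
		return "192.168.39.78", nil
	}

	func main() {
		wait := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if ip, err := lookupIP(attempt); err == nil {
				fmt.Println("domain is up at", ip)
				return
			}
			// Grow the delay and add jitter, like the "will retry after Xms" lines.
			d := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
			time.Sleep(d)
			wait = wait * 3 / 2
		}
	}
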
	I1014 20:17:55.030710  421087 crio.go:462] duration metric: took 1.593761668s to copy over tarball
	I1014 20:17:55.030789  421087 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 20:17:56.814579  421087 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.783758686s)
	I1014 20:17:56.814609  421087 crio.go:469] duration metric: took 1.783864241s to extract the tarball
	I1014 20:17:56.814618  421087 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 20:17:56.868000  421087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:17:56.915902  421087 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:17:56.915931  421087 cache_images.go:85] Images are preloaded, skipping loading
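
The preload flow that just completed: `crictl images --output json` showed the control-plane images missing, so the lz4 preload tarball was copied to /preloaded.tar.lz4, unpacked into /var with security xattrs preserved, removed, and the image list re-checked. A sketch of the unpack step, assuming the tarball is already on the machine:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4" // path used in the log above

		// Existence check, mirroring `stat -c "%s %y" /preloaded.tar.lz4`.
		if _, err := os.Stat(tarball); err != nil {
			// Minikube would scp the cached tarball over at this point.
			panic("preload tarball not present: " + err.Error())
		}
		// Unpack into /var, preserving security xattrs, as the logged tar run does.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
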
	I1014 20:17:56.915940  421087 kubeadm.go:934] updating node { 192.168.39.78 8443 v1.34.1 crio true true} ...
	I1014 20:17:56.916066  421087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-880673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1014 20:17:56.916158  421087 ssh_runner.go:195] Run: crio config
	I1014 20:17:56.962683  421087 cni.go:84] Creating CNI manager for "flannel"
	I1014 20:17:56.962717  421087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:17:56.962737  421087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-880673 NodeName:flannel-880673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:17:56.962922  421087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-880673"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:17:56.963011  421087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:17:56.977326  421087 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:17:56.977413  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:17:56.990111  421087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 20:17:57.014665  421087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:17:57.036328  421087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1014 20:17:57.060836  421087 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I1014 20:17:57.065142  421087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
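
This /etc/hosts update (and the host.minikube.internal one at 20:17:53) uses an idempotent pattern: drop any existing line ending in the name, append the fresh IP-to-name mapping, and copy the temp file back over /etc/hosts. The same idea in Go, as a sketch against a local test file (upsertHost is a hypothetical helper; real use needs root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so exactly one line maps the
	// given name, mirroring the `{ grep -v ...; echo ...; } > tmp; cp` pattern.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale entry for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Local stand-in for /etc/hosts so the sketch runs without root.
		_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
		if err := upsertHost("hosts.test", "192.168.39.78", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
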
	I1014 20:17:57.080533  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:57.237432  421087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:17:57.269708  421087 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673 for IP: 192.168.39.78
	I1014 20:17:57.269738  421087 certs.go:195] generating shared ca certs ...
	I1014 20:17:57.269760  421087 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.269989  421087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:17:57.270059  421087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:17:57.270074  421087 certs.go:257] generating profile certs ...
	I1014 20:17:57.270172  421087 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key
	I1014 20:17:57.270204  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt with IP's: []
	I1014 20:17:57.590880  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt ...
	I1014 20:17:57.590941  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: {Name:mkf367293cc65dfacac82f8386e6aa77348cb48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.591193  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key ...
	I1014 20:17:57.591214  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key: {Name:mk32041be4750a3b1dd0573fa6125b7f9b29b38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.591362  421087 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17
	I1014 20:17:57.591389  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
	I1014 20:17:57.958440  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 ...
	I1014 20:17:57.958473  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17: {Name:mk7a0d0e7468fc1ecb2d15a21f1efedfb729160a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.958647  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17 ...
	I1014 20:17:57.958662  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17: {Name:mkcd9e714b6414537d716937c3c1e66a152dc681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.958737  421087 certs.go:382] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt
	I1014 20:17:57.958836  421087 certs.go:386] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key
	I1014 20:17:57.958914  421087 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key
	I1014 20:17:57.958934  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt with IP's: []
	I1014 20:17:58.348483  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt ...
	I1014 20:17:58.348517  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt: {Name:mk768d87c2d8e36cd6890fe09ebcb78d216d69e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:58.348732  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key ...
	I1014 20:17:58.348762  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key: {Name:mk2722cff97c505742c3f319a68d318bbcbed2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
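
The certs block above generates three leaf certificates signed by the shared minikubeCA: a client cert for minikube-user, an apiserver cert whose IP SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78], and an aggregator proxy-client cert. A minimal crypto/x509 sketch of signing a cert with those IP SANs (subjects and validity are illustrative; error handling elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key pair, standing in for the shared minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf cert with the apiserver IP SANs seen in the log.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.78"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
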
	I1014 20:17:58.348993  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:17:58.349046  421087 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:17:58.349061  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:17:58.349092  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:17:58.349124  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:17:58.349156  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:17:58.349211  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:17:58.349821  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:17:58.388851  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:17:58.428916  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:17:58.460766  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:17:58.492261  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:17:58.528396  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:17:58.561053  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:17:58.593742  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:17:58.625414  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:17:58.659235  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:17:58.691366  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:17:58.725392  421087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:17:58.747071  421087 ssh_runner.go:195] Run: openssl version
	I1014 20:17:58.754399  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:17:58.768104  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.773406  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.773478  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.781682  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:17:58.798231  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:17:58.812733  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.819794  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.819891  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.830331  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
	I1014 20:17:58.846865  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:17:58.864053  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.871566  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.871657  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.885584  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
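
Each CA above is installed twice: the PEM is copied under /usr/share/ca-certificates, then a symlink named after `openssl x509 -hash` (e.g. b5213941.0) is created in /etc/ssl/certs so OpenSSL's hashed-directory lookup finds it. A sketch of that second step (needs privileges; paths as in the log):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // installed above

		// Ask openssl for the subject hash, as the logged command does.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))

		// Link /etc/ssl/certs/<hash>.0 to the PEM so hashed lookup finds it.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like `ln -fs`
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
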
	I1014 20:17:58.904825  421087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:17:58.912981  421087 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:17:58.913041  421087 kubeadm.go:400] StartCluster: {Name:flannel-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:17:58.913135  421087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:17:58.913215  421087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:17:58.965501  421087 cri.go:89] found id: ""
	I1014 20:17:58.965586  421087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:17:58.978845  421087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:17:58.991748  421087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:17:59.004199  421087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:17:59.004222  421087 kubeadm.go:157] found existing configuration files:
	
	I1014 20:17:59.004271  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:17:59.016492  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:17:59.016555  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:17:59.029171  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:17:59.041661  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:17:59.041733  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:17:59.056418  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:17:59.068235  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:17:59.068381  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:17:59.081198  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:17:59.093134  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:17:59.093205  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
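
The four grep/rm pairs above apply one rule before `kubeadm init`: a kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; stale or missing files are removed so kubeadm regenerates them. Compactly, as a sketch:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Stale or absent: remove so kubeadm regenerates it.
				os.Remove(f)
				fmt.Println("cleared:", f)
			}
		}
	}
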
	I1014 20:17:59.105941  421087 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 20:17:59.264636  421087 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1014 20:17:58.657774  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:01.139015  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:58.577022  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:58.577800  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:58.577825  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:58.578168  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:58.578194  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:58.578154  422003 retry.go:31] will retry after 1.402636054s: waiting for domain to come up
	I1014 20:17:59.982567  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:59.983205  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:59.983237  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:59.983594  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:59.983640  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:59.983590  422003 retry.go:31] will retry after 2.221969011s: waiting for domain to come up
	I1014 20:18:02.208011  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:02.209248  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:02.209362  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:02.209796  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:02.209827  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:02.209782  422003 retry.go:31] will retry after 2.101932185s: waiting for domain to come up
	W1014 20:18:03.636759  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:05.637870  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:04.313776  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:04.314632  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:04.314664  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:04.315124  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:04.315159  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:04.315054  422003 retry.go:31] will retry after 2.342959019s: waiting for domain to come up
	I1014 20:18:06.660001  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:06.660763  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:06.660792  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:06.661224  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:06.661254  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:06.661182  422003 retry.go:31] will retry after 3.64841419s: waiting for domain to come up
	I1014 20:18:11.536374  421087 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:18:11.536506  421087 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:18:11.536652  421087 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:18:11.536782  421087 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:18:11.536904  421087 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:18:11.536982  421087 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:18:11.538339  421087 out.go:252]   - Generating certificates and keys ...
	I1014 20:18:11.538437  421087 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:18:11.538511  421087 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:18:11.538632  421087 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:18:11.538736  421087 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:18:11.538828  421087 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:18:11.538899  421087 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:18:11.538991  421087 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:18:11.539177  421087 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-880673 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1014 20:18:11.539273  421087 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:18:11.539461  421087 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-880673 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1014 20:18:11.539551  421087 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:18:11.539647  421087 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:18:11.539718  421087 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:18:11.539793  421087 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:18:11.539860  421087 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:18:11.539948  421087 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:18:11.540034  421087 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:18:11.540120  421087 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:18:11.540199  421087 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:18:11.540345  421087 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:18:11.540460  421087 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:18:11.542094  421087 out.go:252]   - Booting up control plane ...
	I1014 20:18:11.542205  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:18:11.542352  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:18:11.542478  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:18:11.542648  421087 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:18:11.542786  421087 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:18:11.542936  421087 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:18:11.543072  421087 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:18:11.543132  421087 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:18:11.543328  421087 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:18:11.543489  421087 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:18:11.543572  421087 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001234355s
	I1014 20:18:11.543691  421087 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:18:11.543814  421087 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.78:8443/livez
	I1014 20:18:11.543944  421087 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:18:11.544060  421087 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:18:11.544183  421087 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.076803108s
	I1014 20:18:11.544288  421087 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.033725081s
	I1014 20:18:11.544397  421087 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001911853s
	I1014 20:18:11.544548  421087 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:18:11.544745  421087 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:18:11.544849  421087 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:18:11.545113  421087 kubeadm.go:318] [mark-control-plane] Marking the node flannel-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:18:11.545193  421087 kubeadm.go:318] [bootstrap-token] Using token: mb1gep.qj5bz6jgot4fwn77
	I1014 20:18:11.547493  421087 out.go:252]   - Configuring RBAC rules ...
	I1014 20:18:11.547615  421087 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:18:11.547742  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:18:11.547955  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:18:11.548135  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:18:11.548274  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:18:11.548423  421087 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:18:11.548592  421087 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:18:11.548666  421087 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:18:11.548750  421087 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:18:11.548772  421087 kubeadm.go:318] 
	I1014 20:18:11.548854  421087 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:18:11.548868  421087 kubeadm.go:318] 
	I1014 20:18:11.548957  421087 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:18:11.548969  421087 kubeadm.go:318] 
	I1014 20:18:11.549017  421087 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:18:11.549103  421087 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:18:11.549161  421087 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:18:11.549173  421087 kubeadm.go:318] 
	I1014 20:18:11.549239  421087 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:18:11.549250  421087 kubeadm.go:318] 
	I1014 20:18:11.549352  421087 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:18:11.549378  421087 kubeadm.go:318] 
	I1014 20:18:11.549446  421087 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:18:11.549569  421087 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:18:11.549669  421087 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:18:11.549678  421087 kubeadm.go:318] 
	I1014 20:18:11.549781  421087 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:18:11.549879  421087 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:18:11.549892  421087 kubeadm.go:318] 
	I1014 20:18:11.549998  421087 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mb1gep.qj5bz6jgot4fwn77 \
	I1014 20:18:11.550130  421087 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:18:11.550176  421087 kubeadm.go:318] 	--control-plane 
	I1014 20:18:11.550185  421087 kubeadm.go:318] 
	I1014 20:18:11.550261  421087 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:18:11.550268  421087 kubeadm.go:318] 
	I1014 20:18:11.550378  421087 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mb1gep.qj5bz6jgot4fwn77 \
	I1014 20:18:11.550496  421087 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
	I1014 20:18:11.550521  421087 cni.go:84] Creating CNI manager for "flannel"
	I1014 20:18:11.552888  421087 out.go:179] * Configuring Flannel (Container Networking Interface) ...
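
The two join commands printed above carry a --discovery-token-ca-cert-hash pin. That value is not secret material; it is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which a joining node uses to verify it is talking to the intended control plane. A minimal Go sketch of the derivation (the CA path is kubeadm's conventional location, assumed here; illustrative code, not minikube's):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA certificate from kubeadm's default location.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
}
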
	W1014 20:18:08.137637  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:10.138153  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:10.313428  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.314180  421402 main.go:141] libmachine: (bridge-880673) found domain IP: 192.168.61.105
	I1014 20:18:10.314203  421402 main.go:141] libmachine: (bridge-880673) reserving static IP address...
	I1014 20:18:10.314217  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has current primary IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.314677  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find host DHCP lease matching {name: "bridge-880673", mac: "52:54:00:21:00:20", ip: "192.168.61.105"} in network mk-bridge-880673
	I1014 20:18:10.548894  421402 main.go:141] libmachine: (bridge-880673) DBG | Getting to WaitForSSH function...
	I1014 20:18:10.548930  421402 main.go:141] libmachine: (bridge-880673) reserved static IP address 192.168.61.105 for domain bridge-880673
	I1014 20:18:10.548965  421402 main.go:141] libmachine: (bridge-880673) waiting for SSH...
	I1014 20:18:10.552436  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.552981  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.553012  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.553268  421402 main.go:141] libmachine: (bridge-880673) DBG | Using SSH client type: external
	I1014 20:18:10.553294  421402 main.go:141] libmachine: (bridge-880673) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa (-rw-------)
	I1014 20:18:10.553354  421402 main.go:141] libmachine: (bridge-880673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:18:10.553373  421402 main.go:141] libmachine: (bridge-880673) DBG | About to run SSH command:
	I1014 20:18:10.553390  421402 main.go:141] libmachine: (bridge-880673) DBG | exit 0
	I1014 20:18:10.685704  421402 main.go:141] libmachine: (bridge-880673) DBG | SSH cmd err, output: <nil>: 
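
The "About to run SSH command: exit 0" exchange above is libmachine's readiness probe: it keeps opening SSH sessions and running a no-op until one succeeds. A simplified Go sketch of the same wait pattern, polling only for TCP reachability on port 22 (a weaker check than a real `exit 0` over SSH; the address and timeouts are taken from this run and are otherwise arbitrary):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection succeeds or timeout expires.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil // sshd (or at least the port) is up
		}
		time.Sleep(2 * time.Second) // guest may still be booting; retry
	}
	return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
}

func main() {
	fmt.Println(waitForSSH("192.168.61.105:22", 4*time.Minute))
}
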
	I1014 20:18:10.686022  421402 main.go:141] libmachine: (bridge-880673) domain creation complete
	I1014 20:18:10.686447  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:18:10.687088  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:10.687355  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:10.687542  421402 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 20:18:10.687560  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:10.689236  421402 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 20:18:10.689253  421402 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 20:18:10.689261  421402 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 20:18:10.689269  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.692164  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.692638  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.692667  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.692889  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.693092  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.693260  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.693451  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.693655  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.693975  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.693994  421402 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 20:18:10.800086  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:18:10.800112  421402 main.go:141] libmachine: Detecting the provisioner...
	I1014 20:18:10.800125  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.805064  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.805804  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.805841  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.806339  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.806645  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.806900  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.807108  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.807424  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.807729  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.807746  421402 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 20:18:10.924938  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1014 20:18:10.925034  421402 main.go:141] libmachine: found compatible host: buildroot
	I1014 20:18:10.925048  421402 main.go:141] libmachine: Provisioning with buildroot...
	I1014 20:18:10.925060  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:10.925444  421402 buildroot.go:166] provisioning hostname "bridge-880673"
	I1014 20:18:10.925485  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:10.925766  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.929615  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.930124  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.930176  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.930503  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.930771  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.930988  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.931168  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.931376  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.931687  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.931711  421402 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-880673 && echo "bridge-880673" | sudo tee /etc/hostname
	I1014 20:18:11.067523  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-880673
	
	I1014 20:18:11.067572  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.072145  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.072622  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.072658  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.072970  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.073270  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.073503  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.073722  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.073955  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.074245  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.074276  421402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-880673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-880673/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-880673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:18:11.198456  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:18:11.198494  421402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:18:11.198543  421402 buildroot.go:174] setting up certificates
	I1014 20:18:11.198557  421402 provision.go:84] configureAuth start
	I1014 20:18:11.198577  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:11.198927  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:11.202802  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.203150  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.203189  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.203382  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.207637  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.208132  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.208159  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.208374  421402 provision.go:143] copyHostCerts
	I1014 20:18:11.208450  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:18:11.208480  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:18:11.208587  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:18:11.208749  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:18:11.208768  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:18:11.208818  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:18:11.208923  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:18:11.208942  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:18:11.208982  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:18:11.209070  421402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.bridge-880673 san=[127.0.0.1 192.168.61.105 bridge-880673 localhost minikube]
	I1014 20:18:11.337710  421402 provision.go:177] copyRemoteCerts
	I1014 20:18:11.337789  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:18:11.337818  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.340906  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.341359  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.341393  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.341577  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.341801  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.341949  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.342068  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:11.426929  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:18:11.460762  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:18:11.493090  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:18:11.527196  421402 provision.go:87] duration metric: took 328.61751ms to configureAuth
	I1014 20:18:11.527242  421402 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:18:11.527513  421402 config.go:182] Loaded profile config "bridge-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:11.527698  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.531881  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.532435  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.532475  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.532855  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.533121  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.533363  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.533562  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.533772  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.534056  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.534080  421402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:18:11.810259  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:18:11.810308  421402 main.go:141] libmachine: Checking connection to Docker...
	I1014 20:18:11.810336  421402 main.go:141] libmachine: (bridge-880673) Calling .GetURL
	I1014 20:18:11.812149  421402 main.go:141] libmachine: (bridge-880673) DBG | using libvirt version 8000000
	I1014 20:18:11.815234  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.815595  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.815640  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.815928  421402 main.go:141] libmachine: Docker is up and running!
	I1014 20:18:11.815951  421402 main.go:141] libmachine: Reticulating splines...
	I1014 20:18:11.815960  421402 client.go:171] duration metric: took 20.176929121s to LocalClient.Create
	I1014 20:18:11.815991  421402 start.go:167] duration metric: took 20.177016841s to libmachine.API.Create "bridge-880673"
	I1014 20:18:11.816003  421402 start.go:293] postStartSetup for "bridge-880673" (driver="kvm2")
	I1014 20:18:11.816014  421402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:18:11.816042  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:11.816326  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:18:11.816366  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.819358  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.819831  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.819858  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.820144  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.820458  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.820707  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.820915  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:11.907189  421402 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:18:11.912534  421402 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:18:11.912577  421402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:18:11.912663  421402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:18:11.912778  421402 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:18:11.912956  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:18:11.929717  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:18:11.964912  421402 start.go:296] duration metric: took 148.891523ms for postStartSetup
	I1014 20:18:11.964972  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:18:11.965780  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:11.969258  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.969713  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.969742  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.970017  421402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json ...
	I1014 20:18:11.970236  421402 start.go:128] duration metric: took 20.355729631s to createHost
	I1014 20:18:11.970263  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.973266  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.973695  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.973729  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.973951  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.974158  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.974374  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.974517  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.974689  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.975021  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.975038  421402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:18:12.083424  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760473092.051952819
	
	I1014 20:18:12.083453  421402 fix.go:216] guest clock: 1760473092.051952819
	I1014 20:18:12.083464  421402 fix.go:229] Guest: 2025-10-14 20:18:12.051952819 +0000 UTC Remote: 2025-10-14 20:18:11.970250125 +0000 UTC m=+39.025245163 (delta=81.702694ms)
	I1014 20:18:12.083494  421402 fix.go:200] guest clock delta is within tolerance: 81.702694ms
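
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~82ms delta. A minimal sketch of that comparison (float64 parsing loses sub-microsecond precision, which is acceptable for a skew check; the 2s tolerance is an assumed illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output and returns guest minus host.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(dateOutput, 64)
	if err != nil {
		return 0, err
	}
	whole := int64(secs)
	nanos := int64((secs - float64(whole)) * 1e9)
	return time.Unix(whole, nanos).Sub(host), nil
}

func main() {
	host := time.Unix(1760473091, 970250125) // host-side timestamp from the log
	delta, err := guestClockDelta("1760473092.051952819", host)
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < 2*time.Second)
}
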
	I1014 20:18:12.083512  421402 start.go:83] releasing machines lock for "bridge-880673", held for 20.469208293s
	I1014 20:18:12.083543  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.083972  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:12.087662  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.088178  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.088210  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.088501  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089069  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089284  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089444  421402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:18:12.089492  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:12.089796  421402 ssh_runner.go:195] Run: cat /version.json
	I1014 20:18:12.089822  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:12.093687  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.093934  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094119  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.094151  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094397  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:12.094530  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.094555  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094628  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:12.094829  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:12.094921  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:12.095102  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:12.095277  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:12.095286  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:12.095528  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:12.212043  421402 ssh_runner.go:195] Run: systemctl --version
	I1014 20:18:12.219839  421402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:18:12.396741  421402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:18:12.403858  421402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:18:12.403971  421402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:18:12.432004  421402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:18:12.432033  421402 start.go:495] detecting cgroup driver to use...
	I1014 20:18:12.432099  421402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:18:12.461465  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:18:12.490577  421402 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:18:12.490671  421402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:18:12.520721  421402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:18:12.539982  421402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:18:12.718589  421402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:18:12.954522  421402 docker.go:234] disabling docker service ...
	I1014 20:18:12.954602  421402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:18:12.974008  421402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:18:12.992039  421402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:18:13.176121  421402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:18:13.329738  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:18:13.350383  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:18:13.379020  421402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:18:13.379096  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.393521  421402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:18:13.393622  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.408132  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.424356  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.438067  421402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:18:13.454323  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.468678  421402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.492652  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.506834  421402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:18:13.518520  421402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:18:13.518601  421402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:18:13.539443  421402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:18:13.553362  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:13.714608  421402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:18:13.849560  421402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:18:13.849657  421402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:18:13.856369  421402 start.go:563] Will wait 60s for crictl version
	I1014 20:18:13.856447  421402 ssh_runner.go:195] Run: which crictl
	I1014 20:18:13.861030  421402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:18:13.908761  421402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:18:13.908888  421402 ssh_runner.go:195] Run: crio --version
	I1014 20:18:13.943901  421402 ssh_runner.go:195] Run: crio --version
	I1014 20:18:13.977258  421402 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
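
Taken together, the sed edits a few lines up (pause_image, cgroup_manager, conmon_cgroup, and the default_sysctls block) leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf looking roughly like the reconstruction below. Section placement follows CRI-O's documented config layout; this is inferred from the commands shown, not a capture of the actual file:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
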
	I1014 20:18:11.554055  421087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 20:18:11.560813  421087 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 20:18:11.560837  421087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1014 20:18:11.588535  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 20:18:12.126533  421087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:18:12.126608  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:12.126681  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-880673 minikube.k8s.io/updated_at=2025_10_14T20_18_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=flannel-880673 minikube.k8s.io/primary=true
	I1014 20:18:12.315145  421087 ops.go:34] apiserver oom_adj: -16
	I1014 20:18:12.315188  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:12.816025  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:13.315604  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:13.815525  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:14.315435  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:14.816250  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:15.316274  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:15.815903  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:16.315968  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:16.454604  421087 kubeadm.go:1113] duration metric: took 4.328060098s to wait for elevateKubeSystemPrivileges
	I1014 20:18:16.454643  421087 kubeadm.go:402] duration metric: took 17.541607536s to StartCluster
	I1014 20:18:16.454664  421087 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:16.454735  421087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:18:16.456623  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:16.456921  421087 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:18:16.457029  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:18:16.457329  421087 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:16.457372  421087 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:18:16.457439  421087 addons.go:69] Setting storage-provisioner=true in profile "flannel-880673"
	I1014 20:18:16.457455  421087 addons.go:238] Setting addon storage-provisioner=true in "flannel-880673"
	I1014 20:18:16.457481  421087 host.go:66] Checking if "flannel-880673" exists ...
	I1014 20:18:16.457858  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.457879  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.457981  421087 addons.go:69] Setting default-storageclass=true in profile "flannel-880673"
	I1014 20:18:16.458001  421087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-880673"
	I1014 20:18:16.458307  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.458363  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.463489  421087 out.go:179] * Verifying Kubernetes components...
	I1014 20:18:16.465079  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:16.478450  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1014 20:18:16.478459  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I1014 20:18:16.479191  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.479396  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.479983  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.480011  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.480161  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.480183  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.480477  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.480791  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.481287  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.481344  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.481500  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.491227  421087 addons.go:238] Setting addon default-storageclass=true in "flannel-880673"
	I1014 20:18:16.491426  421087 host.go:66] Checking if "flannel-880673" exists ...
	I1014 20:18:16.491952  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.492089  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.505488  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I1014 20:18:16.507142  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.507763  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.507785  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.508447  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.508733  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.512174  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:18:16.516578  421087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1014 20:18:12.636730  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:14.638888  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:16.641600  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
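
The repeated pod_ready warnings above come from polling the pod's Ready condition until it flips to True. A minimal client-go sketch of the same check (pod name taken from the log; the kube-system namespace, kubeconfig source, and poll interval are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-66bc5c9577-489jr", metav1.GetOptions{})
		switch {
		case err != nil:
			fmt.Println("get pod:", err)
		case isPodReady(pod):
			fmt.Println("pod is Ready")
			return
		default:
			fmt.Printf("pod %q is not Ready yet\n", pod.Name)
		}
		time.Sleep(2 * time.Second)
	}
}
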
	I1014 20:18:13.978473  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:13.981686  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:13.982133  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:13.982173  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:13.982419  421402 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 20:18:13.987171  421402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:18:14.003746  421402 kubeadm.go:883] updating cluster {Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:18:14.003909  421402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:18:14.003984  421402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:18:14.045704  421402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 20:18:14.045797  421402 ssh_runner.go:195] Run: which lz4
	I1014 20:18:14.050596  421402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 20:18:14.055602  421402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 20:18:14.055637  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1014 20:18:15.683347  421402 crio.go:462] duration metric: took 1.632759736s to copy over tarball
	I1014 20:18:15.683458  421402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 20:18:17.734331  421402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.050821196s)
	I1014 20:18:17.734369  421402 crio.go:469] duration metric: took 2.050979566s to extract the tarball
	I1014 20:18:17.734381  421402 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 20:18:17.779609  421402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:18:17.833745  421402 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:18:17.833783  421402 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:18:17.833794  421402 kubeadm.go:934] updating node { 192.168.61.105 8443 v1.34.1 crio true true} ...
	I1014 20:18:17.833949  421402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-880673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1014 20:18:17.834056  421402 ssh_runner.go:195] Run: crio config
	I1014 20:18:17.902568  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:18:17.902614  421402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:18:17.902643  421402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-880673 NodeName:bridge-880673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:18:17.902870  421402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-880673"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 20:18:17.902951  421402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:18:17.916601  421402 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:18:17.916685  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:18:17.931432  421402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 20:18:17.955257  421402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:18:17.976773  421402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
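
The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by `---`). A small sketch of sanity-checking one document of such a stream with gopkg.in/yaml.v3, assuming a trimmed-down struct with only the fields of interest rather than the real kubeadm API types:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    // doc captures just the handful of fields this check cares about; the real
    // kubeadm/kubelet APIs define many more.
    type doc struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates over the `---`-separated documents
        for {
            var d doc
            if err := dec.Decode(&d); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            if d.Kind == "KubeletConfiguration" {
                fmt.Printf("kubelet: cgroupDriver=%s endpoint=%s\n",
                    d.CgroupDriver, d.ContainerRuntimeEndpoint)
            }
        }
    }
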
	I1014 20:18:16.518868  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1014 20:18:16.519156  421087 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:16.519189  421087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:18:16.519216  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:18:16.519568  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.520256  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.520366  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.520848  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.521723  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.521776  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.524743  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.525376  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:18:16.525430  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.525459  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:18:16.525706  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:18:16.525936  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:18:16.526249  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:18:16.543071  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1014 20:18:16.543723  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.544379  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.544416  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.544902  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.545161  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.547746  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:18:16.547986  421087 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:16.548005  421087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:18:16.548027  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:18:16.552546  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.553146  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:18:16.553179  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.553702  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:18:16.553918  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:18:16.554145  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:18:16.554417  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:18:16.692451  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
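
The pipeline above patches the CoreDNS Corefile with sed: it inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` line so that host.minikube.internal resolves to the host IP (192.168.39.1 here). A rough Go equivalent of that sed edit, assuming the Corefile is available as a string (the real command round-trips it through kubectl, as shown):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a CoreDNS hosts{} block immediately before the
    // first `forward .` line, mirroring the sed expression in the log above.
    func injectHostRecord(corefile, hostIP string) string {
        block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
        var out []string
        injected := false
        for _, l := range strings.Split(corefile, "\n") {
            if !injected && strings.HasPrefix(strings.TrimSpace(l), "forward .") {
                out = append(out, block)
                injected = true
            }
            out = append(out, l)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
        fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
    }
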
	I1014 20:18:16.853395  421087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:17.217753  421087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:17.236857  421087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:17.687457  421087 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 20:18:17.687804  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.687826  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.688344  421087 main.go:141] libmachine: (flannel-880673) DBG | Closing plugin on server side
	I1014 20:18:17.688399  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.688407  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:17.688417  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.688426  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.688755  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.688770  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:17.689423  421087 node_ready.go:35] waiting up to 15m0s for node "flannel-880673" to be "Ready" ...
	I1014 20:18:17.718251  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.718276  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.718584  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.718604  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.003907  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:18.003930  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:18.004266  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:18.004288  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.004304  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:18.004324  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:18.004625  421087 main.go:141] libmachine: (flannel-880673) DBG | Closing plugin on server side
	I1014 20:18:18.004672  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:18.004689  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.006767  421087 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 20:18:18.001420  421402 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1014 20:18:18.006728  421402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
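
The shell one-liner above is an idempotent rewrite of /etc/hosts: drop any existing control-plane.minikube.internal record, append a fresh one, and copy the result back into place. The same logic as a small Go sketch (names and paths taken from the log; writing /etc/hosts needs root, so this is illustrative only):

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    // upsertHost rewrites hostsPath so it contains exactly one record mapping
    // name to ip -- the Go analogue of the grep -v / echo / cp pipeline above.
    func upsertHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop the stale record, like `grep -v`
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.61.105", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
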
	I1014 20:18:18.023574  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:18.187741  421402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:18.228788  421402 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673 for IP: 192.168.61.105
	I1014 20:18:18.228812  421402 certs.go:195] generating shared ca certs ...
	I1014 20:18:18.228834  421402 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.228995  421402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:18:18.229040  421402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:18:18.229047  421402 certs.go:257] generating profile certs ...
	I1014 20:18:18.229096  421402 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key
	I1014 20:18:18.229110  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt with IP's: []
	I1014 20:18:18.398166  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt ...
	I1014 20:18:18.398200  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: {Name:mk595ad0b234ff7452ec47aa1d9be0f57df00f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.398397  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key ...
	I1014 20:18:18.398414  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key: {Name:mk2a01d027ec022340d98e24a988207f5bf3eecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.398551  421402 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14
	I1014 20:18:18.398571  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.105]
	I1014 20:18:18.722080  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 ...
	I1014 20:18:18.722114  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14: {Name:mk73c594db5b49ecd1f5ae89daf3677a9c0b1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.722308  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14 ...
	I1014 20:18:18.722348  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14: {Name:mk256bd645292252d9623f1c66667da60f375e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.722451  421402 certs.go:382] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt
	I1014 20:18:18.722550  421402 certs.go:386] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key
	I1014 20:18:18.722623  421402 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key
	I1014 20:18:18.722639  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt with IP's: []
	I1014 20:18:18.952984  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt ...
	I1014 20:18:18.953017  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt: {Name:mk32032be198c8c46cdac767e584ac6bc5628c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.953215  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key ...
	I1014 20:18:18.953231  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key: {Name:mk61cb8addb3b895a4ab57106477a1490ec60125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
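
The crypto.go lines above generate client and proxy-client certificate/key pairs signed by the shared minikubeCA. A condensed crypto/x509 sketch of that flow; for self-containment the CA is created in memory here, whereas minikube reuses the on-disk ca.key (the "skipping valid ... ca cert" lines above). The subject names are illustrative, not minikube's exact values.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // In-memory CA standing in for minikubeCA.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Client cert for kubectl-style auth, analogous to profiles/<name>/client.crt.
        cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        cliTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0), // 26280h, per CertExpiration in the config dump above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        cliDER, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
    }
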
	I1014 20:18:18.953815  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:18:18.953892  421402 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:18:18.953904  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:18:18.953953  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:18:18.953988  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:18:18.954014  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:18:18.954063  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:18:18.955513  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:18:19.005900  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:18:19.041417  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:18:19.072123  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:18:19.106421  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:18:19.138409  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:18:19.170789  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:18:19.202103  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 20:18:19.235753  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:18:19.268088  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:18:19.298019  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:18:19.328880  421402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:18:19.352044  421402 ssh_runner.go:195] Run: openssl version
	I1014 20:18:19.359404  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:18:19.372942  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.379120  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.379193  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.386935  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:18:19.401507  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:18:19.415930  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.421263  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.421350  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.428985  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
	I1014 20:18:19.443684  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:18:19.457617  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.463583  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.463673  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.471173  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
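
Each of the three ls/openssl/ln sequences above installs a PEM into the system trust store: compute the OpenSSL subject hash of the certificate, then symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find the cert by hash. A sketch of that step via os/exec; the subject-hash algorithm is fiddly to reimplement, so this defers to the openssl binary exactly as the log does:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCert links /etc/ssl/certs/<subject-hash>.0 at pemPath,
    // mirroring `openssl x509 -hash -noout` plus `ln -fs` above.
    func installCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("openssl: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace a stale link, like `ln -fs`
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
    }
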
	I1014 20:18:19.486280  421402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:18:19.491757  421402 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:18:19.491833  421402 kubeadm.go:400] StartCluster: {Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:18:19.491916  421402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:18:19.491967  421402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:18:19.533746  421402 cri.go:89] found id: ""
	I1014 20:18:19.533842  421402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:18:19.546535  421402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:18:19.558793  421402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:18:19.571345  421402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:18:19.571367  421402 kubeadm.go:157] found existing configuration files:
	
	I1014 20:18:19.571414  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:18:19.582436  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:18:19.582513  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:18:19.595295  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:18:19.606706  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:18:19.606792  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:18:19.619650  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:18:19.631416  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:18:19.631489  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:18:19.647609  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:18:19.660158  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:18:19.660231  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:18:19.672633  421402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 20:18:19.734127  421402 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:18:19.734930  421402 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:18:19.834485  421402 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:18:19.834705  421402 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:18:19.834838  421402 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:18:19.845500  421402 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:18:18.011120  421087 addons.go:514] duration metric: took 1.553739803s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 20:18:18.194030  421087 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-880673" context rescaled to 1 replicas
	W1014 20:18:19.694434  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:19.135894  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:21.135981  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:20.019215  421402 out.go:252]   - Generating certificates and keys ...
	I1014 20:18:20.019364  421402 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:18:20.019450  421402 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:18:20.019568  421402 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:18:20.423172  421402 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:18:20.565417  421402 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:18:20.831800  421402 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:18:20.908719  421402 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:18:20.908947  421402 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-880673 localhost] and IPs [192.168.61.105 127.0.0.1 ::1]
	I1014 20:18:21.163287  421402 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:18:21.163517  421402 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-880673 localhost] and IPs [192.168.61.105 127.0.0.1 ::1]
	I1014 20:18:21.612720  421402 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:18:21.653202  421402 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:18:21.915336  421402 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:18:21.915538  421402 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:18:22.075178  421402 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:18:22.517322  421402 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:18:22.836783  421402 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:18:23.000293  421402 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:18:23.290416  421402 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:18:23.290921  421402 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:18:23.293499  421402 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:18:22.138302  418230 pod_ready.go:94] pod "coredns-66bc5c9577-489jr" is "Ready"
	I1014 20:18:22.138381  418230 pod_ready.go:86] duration metric: took 36.508591303s for pod "coredns-66bc5c9577-489jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.142498  418230 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.147755  418230 pod_ready.go:94] pod "etcd-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.147785  418230 pod_ready.go:86] duration metric: took 5.253572ms for pod "etcd-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.150369  418230 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.155001  418230 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.155028  418230 pod_ready.go:86] duration metric: took 4.637349ms for pod "kube-apiserver-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.158035  418230 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.335464  418230 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.335500  418230 pod_ready.go:86] duration metric: took 177.43826ms for pod "kube-controller-manager-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.535176  418230 pod_ready.go:83] waiting for pod "kube-proxy-qm5zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.935782  418230 pod_ready.go:94] pod "kube-proxy-qm5zb" is "Ready"
	I1014 20:18:22.935816  418230 pod_ready.go:86] duration metric: took 400.604632ms for pod "kube-proxy-qm5zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.135483  418230 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.536595  418230 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-880673" is "Ready"
	I1014 20:18:23.536634  418230 pod_ready.go:86] duration metric: took 401.119182ms for pod "kube-scheduler-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.536650  418230 pod_ready.go:40] duration metric: took 37.911908956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:23.584605  418230 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:18:23.587307  418230 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-880673" cluster and "default" namespace by default
	W1014 20:18:23.592178  418230 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 70a75ecc-5e4e-4ac8-9720-1b3d7c8fcb5b
	W1014 20:18:22.193864  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:24.694222  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	I1014 20:18:23.295285  421402 out.go:252]   - Booting up control plane ...
	I1014 20:18:23.295424  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:18:23.295536  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:18:23.295633  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:18:23.320806  421402 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:18:23.320994  421402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:18:23.329177  421402 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:18:23.329279  421402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:18:23.330288  421402 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:18:23.529018  421402 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:18:23.529195  421402 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:18:24.530359  421402 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001885541s
	I1014 20:18:24.535053  421402 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:18:24.535202  421402 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.105:8443/livez
	I1014 20:18:24.535336  421402 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:18:24.535453  421402 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1014 20:18:26.694542  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:29.194905  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	I1014 20:18:28.543638  421402 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.010215569s
	I1014 20:18:29.650178  421402 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.117608499s
	I1014 20:18:31.034085  421402 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501687487s
	I1014 20:18:31.053112  421402 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:18:31.082721  421402 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:18:31.110576  421402 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:18:31.110938  421402 kubeadm.go:318] [mark-control-plane] Marking the node bridge-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:18:31.135025  421402 kubeadm.go:318] [bootstrap-token] Using token: toe6ef.s59wh81d0jyqrdao
	I1014 20:18:31.136116  421402 out.go:252]   - Configuring RBAC rules ...
	I1014 20:18:31.136263  421402 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:18:31.150468  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:18:31.166893  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:18:31.173525  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:18:31.180987  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:18:31.190512  421402 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:18:31.446174  421402 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:18:31.914059  421402 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:18:32.444243  421402 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:18:32.445280  421402 kubeadm.go:318] 
	I1014 20:18:32.445407  421402 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:18:32.445432  421402 kubeadm.go:318] 
	I1014 20:18:32.445533  421402 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:18:32.445545  421402 kubeadm.go:318] 
	I1014 20:18:32.445582  421402 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:18:32.445669  421402 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:18:32.445737  421402 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:18:32.445745  421402 kubeadm.go:318] 
	I1014 20:18:32.445817  421402 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:18:32.445826  421402 kubeadm.go:318] 
	I1014 20:18:32.445892  421402 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:18:32.445902  421402 kubeadm.go:318] 
	I1014 20:18:32.445963  421402 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:18:32.446060  421402 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:18:32.446157  421402 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:18:32.446168  421402 kubeadm.go:318] 
	I1014 20:18:32.446298  421402 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:18:32.446440  421402 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:18:32.446452  421402 kubeadm.go:318] 
	I1014 20:18:32.446585  421402 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token toe6ef.s59wh81d0jyqrdao \
	I1014 20:18:32.446682  421402 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:18:32.446716  421402 kubeadm.go:318] 	--control-plane 
	I1014 20:18:32.446728  421402 kubeadm.go:318] 
	I1014 20:18:32.446870  421402 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:18:32.446882  421402 kubeadm.go:318] 
	I1014 20:18:32.446998  421402 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token toe6ef.s59wh81d0jyqrdao \
	I1014 20:18:32.447153  421402 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
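
A note on the --discovery-token-ca-cert-hash in the join commands above: kubeadm defines it not as a hash of the whole certificate but as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the cluster's ca.crt (path from the cert-copy steps earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm's format: sha256:<hex digest of the SPKI bytes>
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
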
	I1014 20:18:32.448453  421402 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:18:32.448489  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:18:32.450052  421402 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 20:18:32.451634  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 20:18:32.474124  421402 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
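
The step above writes the bridge CNI config to /etc/cni/net.d/1-k8s.conflist; the log records only its size (496 bytes), not its contents. As a stand-in, the sketch below writes a generic bridge-plus-portmap conflist of the usual shape; the field values are assumptions (only the 10.244.0.0/16 pod CIDR is taken from the kubeadm options above), not the exact payload minikube ships.

    package main

    import (
        "log"
        "os"
    )

    func main() {
        // Hypothetical minimal bridge conflist; not minikube's actual 496-byte file.
        conflist := `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
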
	I1014 20:18:32.499680  421402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:18:32.499764  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:32.499785  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-880673 minikube.k8s.io/updated_at=2025_10_14T20_18_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=bridge-880673 minikube.k8s.io/primary=true
	I1014 20:18:32.645374  421402 ops.go:34] apiserver oom_adj: -16
	I1014 20:18:32.645491  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:30.198516  421087 node_ready.go:49] node "flannel-880673" is "Ready"
	I1014 20:18:30.198554  421087 node_ready.go:38] duration metric: took 12.509099505s for node "flannel-880673" to be "Ready" ...
	I1014 20:18:30.198569  421087 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:18:30.198637  421087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:18:30.308625  421087 api_server.go:72] duration metric: took 13.851661115s to wait for apiserver process to appear ...
	I1014 20:18:30.308663  421087 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:18:30.308691  421087 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I1014 20:18:30.323576  421087 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I1014 20:18:30.326433  421087 api_server.go:141] control plane version: v1.34.1
	I1014 20:18:30.326470  421087 api_server.go:131] duration metric: took 17.796983ms to wait for apiserver health ...
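
api_server.go above polls https://192.168.39.78:8443/healthz until it answers 200 with "ok". A minimal poller in the same spirit; certificate verification is skipped here for brevity, whereas a real client should trust the cluster CA instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    // waitHealthz re-polls url every 500ms until it returns 200 or timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch only: trust the cluster CA in real code instead of skipping verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.39.78:8443/healthz", time.Minute); err != nil {
            log.Fatal(err)
        }
    }
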
	I1014 20:18:30.326481  421087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:18:30.348628  421087 system_pods.go:59] 7 kube-system pods found
	I1014 20:18:30.348690  421087 system_pods.go:61] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.348704  421087 system_pods.go:61] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.348713  421087 system_pods.go:61] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.348720  421087 system_pods.go:61] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.348726  421087 system_pods.go:61] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.348732  421087 system_pods.go:61] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.348745  421087 system_pods.go:61] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.348756  421087 system_pods.go:74] duration metric: took 22.265823ms to wait for pod list to return data ...
	I1014 20:18:30.348774  421087 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:18:30.366984  421087 default_sa.go:45] found service account: "default"
	I1014 20:18:30.367018  421087 default_sa.go:55] duration metric: took 18.234312ms for default service account to be created ...
	I1014 20:18:30.367034  421087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:18:30.448504  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:30.448544  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.448552  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.448577  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.448583  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.448589  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.448596  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.448605  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.448654  421087 retry.go:31] will retry after 195.575996ms: missing components: kube-dns
	I1014 20:18:30.745034  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:30.745071  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.745077  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.745082  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.745087  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.745090  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.745093  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.745098  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.745115  421087 retry.go:31] will retry after 300.243195ms: missing components: kube-dns
	I1014 20:18:31.050699  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:31.050738  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:31.050748  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:31.050764  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:31.050770  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:31.050776  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:31.050781  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:31.050811  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:31.050835  421087 retry.go:31] will retry after 422.638473ms: missing components: kube-dns
	I1014 20:18:31.479212  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:31.479247  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:31.479253  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:31.479267  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:31.479271  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:31.479274  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:31.479277  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:31.479287  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:31.479305  421087 retry.go:31] will retry after 552.0673ms: missing components: kube-dns
	I1014 20:18:32.036669  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:32.036713  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:32.036723  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:32.036731  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:32.036739  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:32.036745  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:32.036750  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:32.036757  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:32.036779  421087 retry.go:31] will retry after 475.098529ms: missing components: kube-dns
	I1014 20:18:32.517112  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:32.517149  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:32.517155  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:32.517161  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:32.517165  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:32.517169  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:32.517172  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:32.517176  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:32.517197  421087 retry.go:31] will retry after 953.369281ms: missing components: kube-dns
	I1014 20:18:33.476303  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:33.476370  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:33.476377  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:33.476383  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:33.476387  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:33.476392  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:33.476397  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:33.476402  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:33.476425  421087 retry.go:31] will retry after 920.517462ms: missing components: kube-dns
	I1014 20:18:34.401821  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:34.401853  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:34.401859  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:34.401866  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:34.401870  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:34.401873  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:34.401876  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:34.401879  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:34.401896  421087 retry.go:31] will retry after 1.443477712s: missing components: kube-dns
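
The `retry.go:31` lines above show minikube re-listing kube-system pods with a growing, jittered delay (422ms, 552ms, 475ms, 953ms, ...) until kube-dns reports Running. A minimal sketch of that poll-with-backoff pattern; the function name, delays, and growth factor here are illustrative, not minikube's actual `pkg/util/retry` code:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollUntilReady re-runs check with a jittered, growing delay until it
// succeeds or the deadline passes. Illustrative only; minikube's real
// backoff helper lives elsewhere and tunes these constants differently.
func pollUntilReady(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 400 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter so concurrent pollers don't retry in lockstep, then grow
		// the base delay, mirroring the irregular sequence in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	_ = pollUntilReady(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
```
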
	I1014 20:18:33.145737  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:33.646138  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:34.145926  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:34.646210  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:35.146099  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:35.646366  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:36.145624  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:36.646172  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:37.146399  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:37.255131  421402 kubeadm.go:1113] duration metric: took 4.755429786s to wait for elevateKubeSystemPrivileges
	I1014 20:18:37.255172  421402 kubeadm.go:402] duration metric: took 17.763344151s to StartCluster
	I1014 20:18:37.255194  421402 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:37.255288  421402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:18:37.257778  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
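
The `lock.go:35` entry above shows the kubeconfig write being serialized behind a named lock with `Delay:500ms Timeout:1m0s`. A minimal stand-in for that acquire/write/release dance, assuming an O_EXCL lockfile; minikube's real lock helper uses a proper mutex library, so treat this purely as a sketch of the retry-until-timeout semantics:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// writeFileLocked retries acquiring a sibling lockfile every `delay`
// until `timeout`, then writes the file. Illustrative stand-in only.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lockPath := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			defer os.Remove(lockPath) // release on return
			defer f.Close()
			break
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("acquiring %s: timed out after %v", lockPath, timeout)
		}
		time.Sleep(delay) // matches the Delay:500ms Timeout:1m0s in the log
	}
	return os.WriteFile(path, data, 0o600)
}

func main() {
	if err := writeFileLocked("kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
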
	I1014 20:18:37.258126  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:18:37.258144  421402 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:18:37.258219  421402 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:18:37.258385  421402 addons.go:69] Setting storage-provisioner=true in profile "bridge-880673"
	I1014 20:18:37.258402  421402 addons.go:238] Setting addon storage-provisioner=true in "bridge-880673"
	I1014 20:18:37.258401  421402 config.go:182] Loaded profile config "bridge-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:37.258434  421402 addons.go:69] Setting default-storageclass=true in profile "bridge-880673"
	I1014 20:18:37.258473  421402 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-880673"
	I1014 20:18:37.258445  421402 host.go:66] Checking if "bridge-880673" exists ...
	I1014 20:18:37.258940  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.258978  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.258992  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.259022  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.260006  421402 out.go:179] * Verifying Kubernetes components...
	I1014 20:18:37.261413  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:37.274681  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1014 20:18:37.275247  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1014 20:18:37.275437  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.275840  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.276003  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.276033  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.276388  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.276413  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.276469  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.276761  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.276951  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.277090  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.277137  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.282107  421402 addons.go:238] Setting addon default-storageclass=true in "bridge-880673"
	I1014 20:18:37.282166  421402 host.go:66] Checking if "bridge-880673" exists ...
	I1014 20:18:37.282652  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.282708  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.293798  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1014 20:18:37.294403  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.294981  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.295009  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.295443  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.295715  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.298299  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:37.298944  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I1014 20:18:37.299497  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.300140  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.300174  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.300461  421402 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:18:37.301140  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.301703  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.301740  421402 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:37.301757  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.301767  421402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:18:37.301796  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:37.306325  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.306981  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:37.307019  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.307281  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:37.307528  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:37.307752  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:37.307934  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:37.318257  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I1014 20:18:37.319117  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.319763  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.319794  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.320309  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.320587  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.323181  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:37.323508  421402 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:37.323528  421402 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:18:37.323565  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:37.327622  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.328225  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:37.328299  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.328692  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:37.328921  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:37.329180  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:37.329376  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:37.569065  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 20:18:37.673615  421402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:37.979883  421402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:38.035566  421402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:38.581036  421402 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.011923093s)
	I1014 20:18:38.581064  421402 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
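
The long sed pipeline above injects a `hosts {}` stanza for `host.minikube.internal` ahead of CoreDNS's `forward` plugin, then replaces the configmap. A sketch of the equivalent string transform in Go; the function name is mine, and the real edit happens remotely via `kubectl get`/`replace` as shown in the log:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza just above the forward
// directive so in-cluster DNS resolves host.minikube.internal to the
// host gateway IP. Illustrative mirror of the sed pipeline in the log.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // insert just above the forward directive
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
}
```
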
	I1014 20:18:38.582891  421402 node_ready.go:35] waiting up to 15m0s for node "bridge-880673" to be "Ready" ...
	I1014 20:18:38.606013  421402 node_ready.go:49] node "bridge-880673" is "Ready"
	I1014 20:18:38.606049  421402 node_ready.go:38] duration metric: took 23.095286ms for node "bridge-880673" to be "Ready" ...
	I1014 20:18:38.606063  421402 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:18:38.606116  421402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:18:39.093412  421402 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-880673" context rescaled to 1 replicas
	I1014 20:18:39.219229  421402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18362169s)
	I1014 20:18:39.219308  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219340  421402 api_server.go:72] duration metric: took 1.961152737s to wait for apiserver process to appear ...
	I1014 20:18:39.219366  421402 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:18:39.219370  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219390  421402 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8443/healthz ...
	I1014 20:18:39.219437  421402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.239515217s)
	I1014 20:18:39.219485  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219497  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219763  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.219781  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.219790  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219792  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.219798  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219802  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.219810  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219816  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219763  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220181  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220216  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.220223  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.220501  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220528  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.220536  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.232972  421402 api_server.go:279] https://192.168.61.105:8443/healthz returned 200:
	ok
	I1014 20:18:39.234990  421402 api_server.go:141] control plane version: v1.34.1
	I1014 20:18:39.235027  421402 api_server.go:131] duration metric: took 15.652258ms to wait for apiserver health ...
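
The `api_server.go` lines above probe `https://<node-ip>:8443/healthz` and accept a 200 `ok` body as healthy. A self-contained sketch of that probe; `InsecureSkipVerify` here stands in for minikube's real client-certificate setup and is not something to copy into production code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver healthz endpoint
// and treats anything other than HTTP 200 as unhealthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: skip cert verification instead of loading
			// the cluster CA and client certs as minikube does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200: %s\n", url, body)
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.61.105:8443/healthz")
}
```
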
	I1014 20:18:39.235040  421402 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:18:39.242166  421402 system_pods.go:59] 8 kube-system pods found
	I1014 20:18:39.242220  421402 system_pods.go:61] "coredns-66bc5c9577-8b8hg" [70f51c62-064a-4e6c-961a-da0757f26ece] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.242234  421402 system_pods.go:61] "coredns-66bc5c9577-z9sbn" [941347c6-aa6a-4d96-b98c-abb8b48702c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.242243  421402 system_pods.go:61] "etcd-bridge-880673" [b0b12648-e276-47be-a4f1-6ddbd23fb520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:18:39.242264  421402 system_pods.go:61] "kube-apiserver-bridge-880673" [736ac614-7af4-4c45-b48b-6f1f4de0d65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:18:39.242272  421402 system_pods.go:61] "kube-controller-manager-bridge-880673" [15158151-826b-4926-8d0f-79f63637d077] Running
	I1014 20:18:39.242284  421402 system_pods.go:61] "kube-proxy-b2vwp" [c5f3d981-d7da-4fbe-8cc3-603f7ee70a2f] Running
	I1014 20:18:39.242289  421402 system_pods.go:61] "kube-scheduler-bridge-880673" [7655ca5b-fd91-4b72-a132-99da3263baef] Running
	I1014 20:18:39.242297  421402 system_pods.go:61] "storage-provisioner" [436f99c1-6baf-41df-992f-a68144437bef] Pending
	I1014 20:18:39.242306  421402 system_pods.go:74] duration metric: took 7.258272ms to wait for pod list to return data ...
	I1014 20:18:39.242331  421402 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:18:39.248981  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.249001  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.249369  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.249392  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.249373  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.250981  421402 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 20:18:35.850842  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:35.850895  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:35.850906  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:35.850917  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:35.850933  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:35.850945  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:35.850950  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:35.850955  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:35.850981  421087 retry.go:31] will retry after 1.11930574s: missing components: kube-dns
	I1014 20:18:36.975755  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:36.975789  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:36.975795  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:36.975802  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:36.975805  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:36.975809  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:36.975812  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:36.975815  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:36.975830  421087 retry.go:31] will retry after 1.548344288s: missing components: kube-dns
	I1014 20:18:38.531860  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:38.531901  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:38.531909  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:38.531917  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:38.531924  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:38.531930  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:38.531935  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:38.531939  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:38.531961  421087 retry.go:31] will retry after 2.303983878s: missing components: kube-dns
	I1014 20:18:39.252229  421402 addons.go:514] duration metric: took 1.994016723s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 20:18:39.252625  421402 default_sa.go:45] found service account: "default"
	I1014 20:18:39.252645  421402 default_sa.go:55] duration metric: took 10.306166ms for default service account to be created ...
	I1014 20:18:39.252653  421402 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:18:39.256521  421402 system_pods.go:86] 8 kube-system pods found
	I1014 20:18:39.256549  421402 system_pods.go:89] "coredns-66bc5c9577-8b8hg" [70f51c62-064a-4e6c-961a-da0757f26ece] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.256555  421402 system_pods.go:89] "coredns-66bc5c9577-z9sbn" [941347c6-aa6a-4d96-b98c-abb8b48702c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.256563  421402 system_pods.go:89] "etcd-bridge-880673" [b0b12648-e276-47be-a4f1-6ddbd23fb520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:18:39.256568  421402 system_pods.go:89] "kube-apiserver-bridge-880673" [736ac614-7af4-4c45-b48b-6f1f4de0d65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:18:39.256572  421402 system_pods.go:89] "kube-controller-manager-bridge-880673" [15158151-826b-4926-8d0f-79f63637d077] Running
	I1014 20:18:39.256576  421402 system_pods.go:89] "kube-proxy-b2vwp" [c5f3d981-d7da-4fbe-8cc3-603f7ee70a2f] Running
	I1014 20:18:39.256579  421402 system_pods.go:89] "kube-scheduler-bridge-880673" [7655ca5b-fd91-4b72-a132-99da3263baef] Running
	I1014 20:18:39.256583  421402 system_pods.go:89] "storage-provisioner" [436f99c1-6baf-41df-992f-a68144437bef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:39.256590  421402 system_pods.go:126] duration metric: took 3.932245ms to wait for k8s-apps to be running ...
	I1014 20:18:39.256599  421402 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:18:39.256645  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:18:39.282073  421402 system_svc.go:56] duration metric: took 25.461591ms WaitForService to wait for kubelet
	I1014 20:18:39.282105  421402 kubeadm.go:586] duration metric: took 2.023923097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:18:39.282124  421402 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:18:39.286484  421402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:18:39.286511  421402 node_conditions.go:123] node cpu capacity is 2
	I1014 20:18:39.286525  421402 node_conditions.go:105] duration metric: took 4.396181ms to run NodePressure ...
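
The `node_conditions.go` lines above read the node's ephemeral-storage and CPU capacity from its status. A sketch of that read using the `k8s.io/api` and `k8s.io/apimachinery` types (both are assumed dependencies; the function name and zero-capacity check are mine):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// verifyNodePressure reports the node's ephemeral-storage and CPU
// capacity, failing if either is unset. Illustrative only.
func verifyNodePressure(node *corev1.Node) error {
	storage := node.Status.Capacity.StorageEphemeral()
	cpu := node.Status.Capacity.Cpu()
	if storage.IsZero() || cpu.IsZero() {
		return fmt.Errorf("node %s reports no capacity", node.Name)
	}
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %d\n", cpu.Value())
	return nil
}

func main() {
	node := &corev1.Node{}
	node.Name = "bridge-880673"
	node.Status.Capacity = corev1.ResourceList{
		corev1.ResourceCPU:              resource.MustParse("2"),
		corev1.ResourceEphemeralStorage: resource.MustParse("17734596Ki"),
	}
	_ = verifyNodePressure(node)
}
```
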
	I1014 20:18:39.286536  421402 start.go:241] waiting for startup goroutines ...
	I1014 20:18:39.286542  421402 start.go:246] waiting for cluster config update ...
	I1014 20:18:39.286553  421402 start.go:255] writing updated cluster config ...
	I1014 20:18:39.286810  421402 ssh_runner.go:195] Run: rm -f paused
	I1014 20:18:39.293433  421402 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:39.297504  421402 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:18:41.305400  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	I1014 20:18:40.840752  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:40.840790  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:40.840797  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:40.840804  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:40.840844  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:40.840854  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:40.840859  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:40.840862  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:40.840878  421087 retry.go:31] will retry after 3.033191594s: missing components: kube-dns
	I1014 20:18:43.880195  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:43.880247  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:43.880259  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:43.880268  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:43.880274  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:43.880281  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:43.880287  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:43.880293  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:43.880340  421087 retry.go:31] will retry after 3.158409259s: missing components: kube-dns
	W1014 20:18:43.306042  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	W1014 20:18:45.806697  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	W1014 20:18:47.807458  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	I1014 20:18:47.043989  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:47.044031  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Running
	I1014 20:18:47.044040  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:47.044047  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:47.044055  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:47.044063  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:47.044068  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:47.044073  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:47.044086  421087 system_pods.go:126] duration metric: took 16.677042702s to wait for k8s-apps to be running ...
	I1014 20:18:47.044103  421087 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:18:47.044166  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:18:47.065113  421087 system_svc.go:56] duration metric: took 20.994672ms WaitForService to wait for kubelet
	I1014 20:18:47.065149  421087 kubeadm.go:586] duration metric: took 30.60819358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:18:47.065168  421087 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:18:47.069360  421087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:18:47.069389  421087 node_conditions.go:123] node cpu capacity is 2
	I1014 20:18:47.069403  421087 node_conditions.go:105] duration metric: took 4.230142ms to run NodePressure ...
	I1014 20:18:47.069415  421087 start.go:241] waiting for startup goroutines ...
	I1014 20:18:47.069422  421087 start.go:246] waiting for cluster config update ...
	I1014 20:18:47.069432  421087 start.go:255] writing updated cluster config ...
	I1014 20:18:47.069740  421087 ssh_runner.go:195] Run: rm -f paused
	I1014 20:18:47.075122  421087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:47.081609  421087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t5q7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.089859  421087 pod_ready.go:94] pod "coredns-66bc5c9577-t5q7c" is "Ready"
	I1014 20:18:47.089897  421087 pod_ready.go:86] duration metric: took 8.262276ms for pod "coredns-66bc5c9577-t5q7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.092266  421087 pod_ready.go:83] waiting for pod "etcd-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.096766  421087 pod_ready.go:94] pod "etcd-flannel-880673" is "Ready"
	I1014 20:18:47.096790  421087 pod_ready.go:86] duration metric: took 4.49984ms for pod "etcd-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.098892  421087 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.104135  421087 pod_ready.go:94] pod "kube-apiserver-flannel-880673" is "Ready"
	I1014 20:18:47.104168  421087 pod_ready.go:86] duration metric: took 5.248988ms for pod "kube-apiserver-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.107825  421087 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.481377  421087 pod_ready.go:94] pod "kube-controller-manager-flannel-880673" is "Ready"
	I1014 20:18:47.481408  421087 pod_ready.go:86] duration metric: took 373.557889ms for pod "kube-controller-manager-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.680640  421087 pod_ready.go:83] waiting for pod "kube-proxy-js9r5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.082711  421087 pod_ready.go:94] pod "kube-proxy-js9r5" is "Ready"
	I1014 20:18:48.082746  421087 pod_ready.go:86] duration metric: took 402.071673ms for pod "kube-proxy-js9r5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.280325  421087 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.681055  421087 pod_ready.go:94] pod "kube-scheduler-flannel-880673" is "Ready"
	I1014 20:18:48.681090  421087 pod_ready.go:86] duration metric: took 400.726895ms for pod "kube-scheduler-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.681105  421087 pod_ready.go:40] duration metric: took 1.605941594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:48.731111  421087 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:18:48.733952  421087 out.go:179] * Done! kubectl is now configured to use "flannel-880673" cluster and "default" namespace by default
	I1014 20:18:50.300575  421402 pod_ready.go:99] pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8b8hg" not found
	I1014 20:18:50.300611  421402 pod_ready.go:86] duration metric: took 11.003081713s for pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:50.300627  421402 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z9sbn" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:18:52.306596  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:54.306681  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:56.308060  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:58.308167  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:00.309255  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:02.806751  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:04.807617  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:06.808157  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:08.808270  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:11.309833  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:13.808468  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	I1014 20:19:16.307018  421402 pod_ready.go:94] pod "coredns-66bc5c9577-z9sbn" is "Ready"
	I1014 20:19:16.307069  421402 pod_ready.go:86] duration metric: took 26.006434395s for pod "coredns-66bc5c9577-z9sbn" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.310071  421402 pod_ready.go:83] waiting for pod "etcd-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.314834  421402 pod_ready.go:94] pod "etcd-bridge-880673" is "Ready"
	I1014 20:19:16.314860  421402 pod_ready.go:86] duration metric: took 4.757751ms for pod "etcd-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.316992  421402 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.321169  421402 pod_ready.go:94] pod "kube-apiserver-bridge-880673" is "Ready"
	I1014 20:19:16.321205  421402 pod_ready.go:86] duration metric: took 4.190547ms for pod "kube-apiserver-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.323265  421402 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.504811  421402 pod_ready.go:94] pod "kube-controller-manager-bridge-880673" is "Ready"
	I1014 20:19:16.504838  421402 pod_ready.go:86] duration metric: took 181.547833ms for pod "kube-controller-manager-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.704795  421402 pod_ready.go:83] waiting for pod "kube-proxy-b2vwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.104667  421402 pod_ready.go:94] pod "kube-proxy-b2vwp" is "Ready"
	I1014 20:19:17.104697  421402 pod_ready.go:86] duration metric: took 399.871111ms for pod "kube-proxy-b2vwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.305175  421402 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.704034  421402 pod_ready.go:94] pod "kube-scheduler-bridge-880673" is "Ready"
	I1014 20:19:17.704059  421402 pod_ready.go:86] duration metric: took 398.852515ms for pod "kube-scheduler-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.704072  421402 pod_ready.go:40] duration metric: took 38.410605028s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:19:17.752777  421402 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:19:17.754591  421402 out.go:179] * Done! kubectl is now configured to use "bridge-880673" cluster and "default" namespace by default
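
Both runs above finish by waiting for each kube-system pod to be "Ready" or be gone; note the coredns-66bc5c9577-8b8hg case, which resolves via the "gone" branch once the deployment is rescaled to one replica. A sketch of the readiness predicate behind those `pod_ready.go` lines, using `k8s.io/api` types (assumed dependency; the helper name is mine, and the deleted-pod branch is omitted):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// which is what the pod_ready.go:94 "is Ready" lines correspond to.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Name = "coredns-66bc5c9577-z9sbn"
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}
	fmt.Printf("pod %q ready: %v\n", pod.Name, isPodReady(pod))
}
```
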
	
	
	==> CRI-O <==
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.055278496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473366425428323,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb7d086b-a70d-410d-99c7-98d387ccca65 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.093884815Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf8e7605-de06-46ef-bf0f-26044f9d4609 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.094013790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf8e7605-de06-46ef-bf0f-26044f9d4609 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.094993785Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5998ff63-b3c2-4bba-90a6-785dfc395e11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.095441421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760473518095397671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5998ff63-b3c2-4bba-90a6-785dfc395e11 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.096014715Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca2583ff-5bb0-4049-b12c-cc65a1fa982c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.096084023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca2583ff-5bb0-4049-b12c-cc65a1fa982c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.096280693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473366425428323,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17604
72955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933
be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=ca2583ff-5bb0-4049-b12c-cc65a1fa982c name=/runtime.v1.RuntimeService/ListContainers
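The repeated Version/ImageFsInfo/ListContainers exchanges above are routine polling of CRI-O's gRPC API (by the kubelet and by the log collector), not errors; the same full container list comes back each time because the request carries an empty filter. For reference, here is a minimal Go sketch of the same ListContainers call, assuming CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api v1 bindings:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial CRI-O's gRPC endpoint; /var/run/crio/crio.sock is the
        // default socket path (an assumption for this sketch).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter returns the full container list, which is what
        // triggers the "No filters were applied" debug line in the log.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %-17s  %s\n", c.Id, c.State, c.Metadata.Name)
        }
    }

crictl ps -a issues essentially the same call; the "==> container status <==" table further down is this response rendered as columns.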
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.207766026Z" level=debug msg="Content-Type from manifest GET is \"application/json\"" file="docker/docker_client.go:964"
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.207981769Z" level=debug msg="Error preparing image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" file="server/image_pull.go:213" id=9e2be8af-9f50-452d-8203-8742c25b8927 name=/runtime.v1.ImageService/PullImage
	Oct 14 20:25:18 embed-certs-158674 crio[883]: time="2025-10-14 20:25:18.208098735Z" level=debug msg="Response error: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" file="otel-collector/interceptors.go:71" id=9e2be8af-9f50-452d-8203-8742c25b8927 name=/runtime.v1.ImageService/PullImage
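The failure recorded here is Docker Hub's unauthenticated pull rate limit (toomanyrequests) while fetching the dashboard image, an infrastructure flake rather than a regression in the code under test. One common workaround is to pre-load the image into the cluster's container storage so CRI-O never contacts docker.io; below is a sketch using the real "minikube image load" subcommand (the helper is hypothetical, and loading by digest assumes the image is already available in a local cache or tarball):

    package main

    import (
        "log"
        "os/exec"
    )

    // preloadImage is a hypothetical helper: it shells out to the real
    // "minikube image load" subcommand to copy a locally available image
    // into the cluster's CRI-O storage, so kubelet never pulls it from
    // docker.io (and never hits the rate limit).
    func preloadImage(profile, image string) error {
        out, err := exec.Command("minikube", "-p", profile,
            "image", "load", image).CombinedOutput()
        if err != nil {
            log.Printf("minikube image load: %s", out)
        }
        return err
    }

    func main() {
        // Profile name and digest are taken from the failing pull above.
        img := "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
        if err := preloadImage("embed-certs-158674", img); err != nil {
            log.Fatal(err)
        }
    }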
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	485f6b335715e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   6                   c6f965189f5cc       dashboard-metrics-scraper-6ffb444bf9-mz5cm
	329ec9ece1ee8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         2                   cbee72f0353c9       storage-provisioner
	d94ac69e5f904       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                     1                   62896cd3ba6e5       busybox
	19cd1852378ae       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   aceff373a9141       coredns-66bc5c9577-ct9rr
	c43842bd6420c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      9 minutes ago       Running             kube-proxy                  1                   c6b15e6c26fa2       kube-proxy-rh6wc
	003294c62d4e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         1                   cbee72f0353c9       storage-provisioner
	7c22c86b8a4ca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   e1eb325c1e8e2       etcd-embed-certs-158674
	5ad28306f0acd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      9 minutes ago       Running             kube-scheduler              1                   5c55a53d5d8e4       kube-scheduler-embed-certs-158674
	760bc07e5c704       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      9 minutes ago       Running             kube-controller-manager     1                   adeb9462d402a       kube-controller-manager-embed-certs-158674
	1e686aedaa84a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      9 minutes ago       Running             kube-apiserver              1                   983272502e832       kube-apiserver-embed-certs-158674
	
	
	==> coredns [19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46602 - 52436 "HINFO IN 2360683752022516549.7876052886551378901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09399007s
	
	
	==> describe nodes <==
	Name:               embed-certs-158674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-158674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=embed-certs-158674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_13_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-158674
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:25:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:21:47 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:21:47 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:21:47 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:21:47 +0000   Tue, 14 Oct 2025 20:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.78
	  Hostname:    embed-certs-158674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3d3099291b42e0a6671211eb8f48e0
	  System UUID:                fc3d3099-291b-42e0-a667-1211eb8f48e0
	  Boot ID:                    74f9463f-48fa-45b6-9fc9-8e2fee57a938
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-ct9rr                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-embed-certs-158674                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-embed-certs-158674             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-158674    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-rh6wc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-158674             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-rbchd               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mz5cm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lhkkm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m16s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                    kubelet          Node embed-certs-158674 status is now: NodeReady
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                    node-controller  Node embed-certs-158674 event: Registered Node embed-certs-158674 in Controller
	  Normal   Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m18s                  kubelet          Node embed-certs-158674 has been rebooted, boot id: 74f9463f-48fa-45b6-9fc9-8e2fee57a938
	  Normal   RegisteredNode           9m14s                  node-controller  Node embed-certs-158674 event: Registered Node embed-certs-158674 in Controller
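As a sanity check, the percentages in the Allocated resources table above follow from the request columns in the pod table: the CPU requests sum to 100+100+250+200+100+100 = 850m of the 2000m allocatable (42.5%, truncated to 42%), and the memory requests sum to 70+100+200 = 370Mi of roughly 2971Mi (3042712Ki). A trivial Go check of that arithmetic, with the values copied from the tables above:

    package main

    import "fmt"

    func main() {
        // Millicore requests from the Non-terminated Pods table:
        // coredns, etcd, kube-apiserver, kube-controller-manager,
        // kube-scheduler, metrics-server.
        cpuRequests := []int{100, 100, 250, 200, 100, 100}
        sum := 0
        for _, m := range cpuRequests {
            sum += m
        }
        const allocatableMilli = 2000 // 2 CPUs
        fmt.Printf("cpu: %dm/%dm = %.1f%%\n",
            sum, allocatableMilli, 100*float64(sum)/allocatableMilli)

        // Memory requests: coredns 70Mi, etcd 100Mi, metrics-server 200Mi.
        memMi := 70 + 100 + 200
        allocMi := 3042712 / 1024 // 3042712Ki from the Allocatable block
        fmt.Printf("memory: %dMi/%dMi = %.1f%%\n",
            memMi, allocMi, 100*float64(memMi)/float64(allocMi))
    }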
	
	
	==> dmesg <==
	[Oct14 20:15] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001676] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001133] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.791244] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.122025] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.104808] kauditd_printk_skb: 46 callbacks suppressed
	[Oct14 20:16] kauditd_printk_skb: 168 callbacks suppressed
	[  +2.758267] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 149 callbacks suppressed
	[ +18.333556] kauditd_printk_skb: 78 callbacks suppressed
	[ +12.032494] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.120158] kauditd_printk_skb: 32 callbacks suppressed
	[Oct14 20:17] kauditd_printk_skb: 13 callbacks suppressed
	[ +34.932103] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:18] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:19] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:22] kauditd_printk_skb: 6 callbacks suppressed
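The etcd section below is dominated by "apply request took too long" warnings, with reads and commits taking hundreds of milliseconds to over a second against etcd's 100ms expectation; on a 2-CPU KVM guest this usually points at host I/O or CPU contention rather than data-level problems. Since etcd emits structured JSON lines, a small helper (a sketch, names illustrative) can count these warnings and report the worst observed latency when triaging a run:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
        "time"
    )

    // etcdLine models only the fields this triage helper needs from
    // etcd's structured JSON log format.
    type etcdLine struct {
        Msg  string `json:"msg"`
        Took string `json:"took"` // e.g. "572.481696ms"
    }

    func main() {
        var count int
        var worst time.Duration
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd lines can be long
        for sc.Scan() {
            line := sc.Text()
            // Log collectors prefix the JSON; trim to the first brace.
            if i := strings.IndexByte(line, '{'); i >= 0 {
                line = line[i:]
            }
            var e etcdLine
            if json.Unmarshal([]byte(line), &e) != nil ||
                e.Msg != "apply request took too long" {
                continue
            }
            if d, err := time.ParseDuration(e.Took); err == nil {
                count++
                if d > worst {
                    worst = d
                }
            }
        }
        fmt.Printf("slow applies: %d, worst: %s\n", count, worst)
    }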
	
	
	==> etcd [7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3] <==
	{"level":"info","ts":"2025-10-14T20:16:15.980064Z","caller":"traceutil/trace.go:172","msg":"trace[98081159] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:702; }","duration":"572.481696ms","start":"2025-10-14T20:16:15.407573Z","end":"2025-10-14T20:16:15.980055Z","steps":["trace[98081159] 'agreement among raft nodes before linearized reading'  (duration: 572.276906ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:15.980228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"267.985316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:16:15.980260Z","caller":"traceutil/trace.go:172","msg":"trace[770747873] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:702; }","duration":"268.044297ms","start":"2025-10-14T20:16:15.712207Z","end":"2025-10-14T20:16:15.980251Z","steps":["trace[770747873] 'agreement among raft nodes before linearized reading'  (duration: 267.965338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:16:15.979906Z","caller":"traceutil/trace.go:172","msg":"trace[489042868] transaction","detail":"{read_only:false; response_revision:702; number_of_response:1; }","duration":"1.325959731s","start":"2025-10-14T20:16:14.653905Z","end":"2025-10-14T20:16:15.979865Z","steps":["trace[489042868] 'process raft request'  (duration: 1.321260662s)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:15.980596Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:16:14.653884Z","time spent":"1.326632358s","remote":"127.0.0.1:56754","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4012,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:575 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:3958 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2025-10-14T20:16:15.980760Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"566.366145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-rh6wc\" limit:1 ","response":"range_response_count:1 size:5006"}
	{"level":"info","ts":"2025-10-14T20:16:15.980789Z","caller":"traceutil/trace.go:172","msg":"trace[901854029] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-rh6wc; range_end:; response_count:1; response_revision:702; }","duration":"566.400931ms","start":"2025-10-14T20:16:15.414381Z","end":"2025-10-14T20:16:15.980782Z","steps":["trace[901854029] 'agreement among raft nodes before linearized reading'  (duration: 566.066733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:15.980814Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:16:15.414368Z","time spent":"566.440246ms","remote":"127.0.0.1:56754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":5028,"request content":"key:\"/registry/pods/kube-system/kube-proxy-rh6wc\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:16:16.492150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"394.68349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:16:16.492292Z","caller":"traceutil/trace.go:172","msg":"trace[2046295095] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:702; }","duration":"394.764689ms","start":"2025-10-14T20:16:16.097442Z","end":"2025-10-14T20:16:16.492206Z","steps":["trace[2046295095] 'range keys from in-memory index tree'  (duration: 394.581216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:16.492343Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:16:16.097426Z","time spent":"394.902369ms","remote":"127.0.0.1:56424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-14T20:17:07.621730Z","caller":"traceutil/trace.go:172","msg":"trace[476807296] linearizableReadLoop","detail":"{readStateIndex:822; appliedIndex:822; }","duration":"144.505606ms","start":"2025-10-14T20:17:07.477185Z","end":"2025-10-14T20:17:07.621690Z","steps":["trace[476807296] 'read index received'  (duration: 144.494662ms)","trace[476807296] 'applied index is now lower than readState.Index'  (duration: 9.281µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:17:07.703631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.363852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-14T20:17:07.703632Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.950167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-10-14T20:17:07.703707Z","caller":"traceutil/trace.go:172","msg":"trace[637731438] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:769; }","duration":"226.536096ms","start":"2025-10-14T20:17:07.477156Z","end":"2025-10-14T20:17:07.703692Z","steps":["trace[637731438] 'agreement among raft nodes before linearized reading'  (duration: 144.718588ms)","trace[637731438] 'range keys from in-memory index tree'  (duration: 81.573948ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-14T20:17:07.703750Z","caller":"traceutil/trace.go:172","msg":"trace[300380464] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:769; }","duration":"194.082801ms","start":"2025-10-14T20:17:07.509635Z","end":"2025-10-14T20:17:07.703718Z","steps":["trace[300380464] 'agreement among raft nodes before linearized reading'  (duration: 193.851517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:25.661457Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.660479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:25.662115Z","caller":"traceutil/trace.go:172","msg":"trace[863305639] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:789; }","duration":"201.331327ms","start":"2025-10-14T20:17:25.460771Z","end":"2025-10-14T20:17:25.662102Z","steps":["trace[863305639] 'range keys from in-memory index tree'  (duration: 200.593695ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:17:26.071049Z","caller":"traceutil/trace.go:172","msg":"trace[568462179] linearizableReadLoop","detail":"{readStateIndex:846; appliedIndex:846; }","duration":"156.459804ms","start":"2025-10-14T20:17:25.914567Z","end":"2025-10-14T20:17:26.071027Z","steps":["trace[568462179] 'read index received'  (duration: 156.453449ms)","trace[568462179] 'applied index is now lower than readState.Index'  (duration: 5.25µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-14T20:17:26.071492Z","caller":"traceutil/trace.go:172","msg":"trace[721736249] transaction","detail":"{read_only:false; response_revision:790; number_of_response:1; }","duration":"244.727624ms","start":"2025-10-14T20:17:25.826750Z","end":"2025-10-14T20:17:26.071478Z","steps":["trace[721736249] 'process raft request'  (duration: 244.613551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:26.072642Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.642916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:26.072798Z","caller":"traceutil/trace.go:172","msg":"trace[1690231268] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:790; }","duration":"101.954334ms","start":"2025-10-14T20:17:25.970831Z","end":"2025-10-14T20:17:26.072786Z","steps":["trace[1690231268] 'agreement among raft nodes before linearized reading'  (duration: 101.618495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:26.074196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.793051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:26.074900Z","caller":"traceutil/trace.go:172","msg":"trace[1848611105] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:789; }","duration":"160.326012ms","start":"2025-10-14T20:17:25.914561Z","end":"2025-10-14T20:17:26.074887Z","steps":["trace[1848611105] 'agreement among raft nodes before linearized reading'  (duration: 156.753436ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:18:20.803355Z","caller":"traceutil/trace.go:172","msg":"trace[705878479] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"282.105022ms","start":"2025-10-14T20:18:20.521191Z","end":"2025-10-14T20:18:20.803296Z","steps":["trace[705878479] 'process raft request'  (duration: 281.981069ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:25:18 up 9 min,  0 users,  load average: 0.29, 0.39, 0.24
	Linux embed-certs-158674 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8] <==
	I1014 20:21:00.999505       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:21:00.999430       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:21:00.999570       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:21:01.000596       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:22:00.999990       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:22:01.000062       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:22:01.000075       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:22:01.001107       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:22:01.001187       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:22:01.001201       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:24:01.001261       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:24:01.001373       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:24:01.001391       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:24:01.001283       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:24:01.001454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:24:01.002633       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003] <==
	I1014 20:19:04.673158       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:19:34.560467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:19:34.681684       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:20:04.566115       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:20:04.691045       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:20:34.571588       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:20:34.699871       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:21:04.579424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:21:04.708396       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:21:34.587327       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:21:34.716908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:22:04.594040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:22:04.725787       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:22:34.601690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:22:34.734767       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:23:04.609711       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:23:04.743648       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:23:34.616682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:23:34.752413       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:24:04.623060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:24:04.760494       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:24:34.628512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:24:34.767335       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:25:04.635252       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:25:04.774996       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f] <==
	I1014 20:16:01.363108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:16:01.464278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:16:01.464315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.78"]
	E1014 20:16:01.464567       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:16:01.501979       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1014 20:16:01.502064       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 20:16:01.502093       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:16:01.511630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:16:01.512349       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:16:01.512401       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:16:01.519602       1 config.go:200] "Starting service config controller"
	I1014 20:16:01.519619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:16:01.519762       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:16:01.519767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:16:01.519891       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:16:01.519895       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:16:01.523036       1 config.go:309] "Starting node config controller"
	I1014 20:16:01.524069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:16:01.524226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:16:01.619859       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:16:01.620059       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 20:16:01.620144       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e] <==
	I1014 20:15:58.449475       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:16:00.098228       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:16:00.098300       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:16:00.109790       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:16:00.111004       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.114656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.111021       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.114772       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.111038       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:16:00.110967       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:16:00.115405       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:16:00.215663       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.216011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.216410       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 14 20:24:34 embed-certs-158674 kubelet[1212]: E1014 20:24:34.631027    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760473474630633222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:24:35 embed-certs-158674 kubelet[1212]: E1014 20:24:35.409069    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:24:38 embed-certs-158674 kubelet[1212]: I1014 20:24:38.409694    1212 scope.go:117] "RemoveContainer" containerID="485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86"
	Oct 14 20:24:38 embed-certs-158674 kubelet[1212]: E1014 20:24:38.409852    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:24:44 embed-certs-158674 kubelet[1212]: E1014 20:24:44.632973    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760473484632576016  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:24:44 embed-certs-158674 kubelet[1212]: E1014 20:24:44.633020    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760473484632576016  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:24:50 embed-certs-158674 kubelet[1212]: E1014 20:24:50.409056    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:24:51 embed-certs-158674 kubelet[1212]: I1014 20:24:51.407448    1212 scope.go:117] "RemoveContainer" containerID="485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86"
	Oct 14 20:24:51 embed-certs-158674 kubelet[1212]: E1014 20:24:51.407627    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:24:54 embed-certs-158674 kubelet[1212]: E1014 20:24:54.634793    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760473494634463233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:24:54 embed-certs-158674 kubelet[1212]: E1014 20:24:54.634833    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760473494634463233  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:25:04 embed-certs-158674 kubelet[1212]: E1014 20:25:04.636770    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760473504636508605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:25:04 embed-certs-158674 kubelet[1212]: E1014 20:25:04.636810    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760473504636508605  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:25:05 embed-certs-158674 kubelet[1212]: I1014 20:25:05.407552    1212 scope.go:117] "RemoveContainer" containerID="485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86"
	Oct 14 20:25:05 embed-certs-158674 kubelet[1212]: E1014 20:25:05.407718    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:25:05 embed-certs-158674 kubelet[1212]: E1014 20:25:05.409769    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:25:14 embed-certs-158674 kubelet[1212]: E1014 20:25:14.639081    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760473514638750588  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:25:14 embed-certs-158674 kubelet[1212]: E1014 20:25:14.639118    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760473514638750588  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:25:16 embed-certs-158674 kubelet[1212]: I1014 20:25:16.406753    1212 scope.go:117] "RemoveContainer" containerID="485f6b335715e70a22f735bb8b4cf1fa6506ac46172580a4ffe008b729eb0f86"
	Oct 14 20:25:16 embed-certs-158674 kubelet[1212]: E1014 20:25:16.406898    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:25:16 embed-certs-158674 kubelet[1212]: E1014 20:25:16.410206    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:25:18 embed-certs-158674 kubelet[1212]: E1014 20:25:18.208480    1212 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 14 20:25:18 embed-certs-158674 kubelet[1212]: E1014 20:25:18.208549    1212 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 14 20:25:18 embed-certs-158674 kubelet[1212]: E1014 20:25:18.208627    1212 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-lhkkm_kubernetes-dashboard(11c1df79-7653-4919-a97e-456c684eec60): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 14 20:25:18 embed-certs-158674 kubelet[1212]: E1014 20:25:18.208669    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm" podUID="11c1df79-7653-4919-a97e-456c684eec60"
	
	
	==> storage-provisioner [003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740] <==
	I1014 20:16:01.122238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 20:16:31.141250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e] <==
	W1014 20:24:52.909368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:54.913389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:54.918611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:56.923756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:56.929038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:58.933066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:24:58.941426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:00.945539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:00.951434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:02.955802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:02.961507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:04.965669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:04.971611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:06.975605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:06.987889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:08.991084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:08.997009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:11.000240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:11.008912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:13.012504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:13.018118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:15.021699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:15.030906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:17.036572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:25:17.043119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-158674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm: exit status 1 (60.401564ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-rbchd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lhkkm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.63s)
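
Root-cause note: the user-app pod above never became ready because every pull of docker.io/kubernetesui/dashboard:v2.7.0 was rejected with toomanyrequests, i.e. Docker Hub's unauthenticated pull rate limit (see the kubelet log above). A minimal mitigation sketch, assuming the CI host can pull the image itself; these commands are illustrative and were not part of the recorded run:

	# Pull once on the host, then side-load into the profile so kubelet
	# never has to contact Docker Hub from the guest:
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p embed-certs-158674 image load docker.io/kubernetesui/dashboard:v2.7.0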

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhkkm" [11c1df79-7653-4919-a97e-456c684eec60] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1014 20:25:40.167407  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:25:42.799987  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:25:54.971582  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:08.091356  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:20.451957  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:22.790606  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:32.608464  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:35.270426  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:42.187707  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:26:48.153618  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:27:01.050852  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:27:02.089684  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:27:02.972465  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:27:28.755911  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:28:24.229561  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:28:26.583046  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:28:48.747657  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:28:51.932821  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:29:16.450504  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:29:18.229413  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:29:45.931447  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:30:14.408642  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:30:15.097348  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:30:54.971404  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:20.451648  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:22.791017  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:35.269861  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:37.473143  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:31:42.187999  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:32:01.051634  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:32:18.038554  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:32:45.864632  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:33:05.254813  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:33:24.229520  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:33:26.583119  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:33:48.746861  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:34:18.230343  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-10-14 20:34:19.720165415 +0000 UTC m=+5044.186311192
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-158674 describe po kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-158674 describe po kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-lhkkm
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-158674/192.168.50.78
Start Time:       Tue, 14 Oct 2025 20:16:10 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85s8k (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-85s8k:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                   From               Message
----     ------            ----                  ----               -------
Warning  FailedScheduling  18m                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm to embed-certs-158674
Warning  Failed            14m (x2 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           12m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            12m (x3 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed            12m (x5 over 17m)     kubelet            Error: ErrImagePull
Normal   BackOff           3m7s (x48 over 17m)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            2m28s (x51 over 17m)  kubelet            Error: ImagePullBackOff
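
The Events above show only toomanyrequests pull failures, with no DNS or connectivity errors, which points at Docker Hub's anonymous rate limit rather than at the cluster itself. The remaining anonymous quota can be checked from the host with Docker's documented rate-limit preview endpoint; a sketch assuming curl and jq are installed on the host:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
	# A ratelimit-remaining: 0 header would confirm the quota is exhausted.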
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard: exit status 1 (77.488394ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-lhkkm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-158674 logs kubernetes-dashboard-855c9754f9-lhkkm -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
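
The same toomanyrequests error can be reproduced from inside the guest, which rules out a host-only proxy or DNS problem; a sketch assuming crictl is on the node path (as it is in minikube's crio guests), again illustrative rather than part of the recorded run:

	out/minikube-linux-amd64 -p embed-certs-158674 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0
	# Expected failure: toomanyrequests: You have reached your unauthenticated pull rate limit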
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-158674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158674 -n embed-certs-158674
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-158674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-158674 logs -n 25: (1.313063324s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                      ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-880673 sudo iptables -t nat -L -n -v                                 │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status kubelet --all --full --no-pager         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl cat kubelet --no-pager                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo journalctl -xeu kubelet --all --full --no-pager          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/kubernetes/kubelet.conf                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /var/lib/kubelet/config.yaml                         │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status docker --all --full --no-pager          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat docker --no-pager                          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/docker/daemon.json                              │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo docker system info                                       │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl status cri-docker --all --full --no-pager      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat cri-docker --no-pager                      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo cat /usr/lib/systemd/system/cri-docker.service           │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cri-dockerd --version                                    │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status containerd --all --full --no-pager      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │                     │
	│ ssh     │ -p bridge-880673 sudo systemctl cat containerd --no-pager                      │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /lib/systemd/system/containerd.service               │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo cat /etc/containerd/config.toml                          │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo containerd config dump                                   │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl status crio --all --full --no-pager            │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo systemctl cat crio --no-pager                            │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ ssh     │ -p bridge-880673 sudo crio config                                              │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	│ delete  │ -p bridge-880673                                                               │ bridge-880673 │ jenkins │ v1.37.0 │ 14 Oct 25 20:19 UTC │ 14 Oct 25 20:19 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
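	
	The ssh-based diagnostics in the table above can be reproduced by hand against any live profile; a minimal sketch using minikube's ssh subcommand (profile name taken from the table, binary path as used elsewhere in this report):
	
	  # dump the CRI-O unit status and effective configuration from inside the VM
	  out/minikube-linux-amd64 -p bridge-880673 ssh "sudo systemctl status crio --all --full --no-pager"
	  out/minikube-linux-amd64 -p bridge-880673 ssh "sudo crio config"
	  # print every file under /etc/crio along with its path
	  out/minikube-linux-amd64 -p bridge-880673 ssh "sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;"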
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 20:17:32
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 20:17:32.989439  421402 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:17:32.989829  421402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:17:32.989845  421402 out.go:374] Setting ErrFile to fd 2...
	I1014 20:17:32.989851  421402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:17:32.990172  421402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:17:32.991056  421402 out.go:368] Setting JSON to false
	I1014 20:17:32.992860  421402 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":7196,"bootTime":1760465857,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:17:32.992967  421402 start.go:141] virtualization: kvm guest
	I1014 20:17:32.995056  421402 out.go:179] * [bridge-880673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:17:32.996549  421402 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:17:32.996532  421402 notify.go:220] Checking for updates...
	I1014 20:17:33.000156  421402 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:17:33.001647  421402 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:17:33.003125  421402 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:33.007989  421402 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:17:33.009484  421402 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:17:33.011588  421402 config.go:182] Loaded profile config "embed-certs-158674": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.011769  421402 config.go:182] Loaded profile config "enable-default-cni-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.011928  421402 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:33.012093  421402 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:17:33.059258  421402 out.go:179] * Using the kvm2 driver based on user configuration
	I1014 20:17:33.060454  421402 start.go:305] selected driver: kvm2
	I1014 20:17:33.060476  421402 start.go:925] validating driver "kvm2" against <nil>
	I1014 20:17:33.060492  421402 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:17:33.061267  421402 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:17:33.061387  421402 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:17:33.077958  421402 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:17:33.077999  421402 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 20:17:33.095092  421402 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 20:17:33.095155  421402 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 20:17:33.095523  421402 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:17:33.095569  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:17:33.095578  421402 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 20:17:33.095654  421402 start.go:349] cluster config:
	{Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:17:33.095800  421402 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 20:17:33.098425  421402 out.go:179] * Starting "bridge-880673" primary control-plane node in "bridge-880673" cluster
	I1014 20:17:30.010628  421087 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 20:17:30.010787  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:30.010834  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:30.028645  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35821
	I1014 20:17:30.029184  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:30.029738  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:17:30.029764  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:30.030161  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:30.030410  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:30.030581  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:30.030784  421087 start.go:159] libmachine.API.Create for "flannel-880673" (driver="kvm2")
	I1014 20:17:30.030820  421087 client.go:168] LocalClient.Create starting
	I1014 20:17:30.030865  421087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 20:17:30.030912  421087 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:30.030940  421087 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:30.031019  421087 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 20:17:30.031060  421087 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:30.031074  421087 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:30.031099  421087 main.go:141] libmachine: Running pre-create checks...
	I1014 20:17:30.031112  421087 main.go:141] libmachine: (flannel-880673) Calling .PreCreateCheck
	I1014 20:17:30.031527  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:30.032027  421087 main.go:141] libmachine: Creating machine...
	I1014 20:17:30.032044  421087 main.go:141] libmachine: (flannel-880673) Calling .Create
	I1014 20:17:30.032196  421087 main.go:141] libmachine: (flannel-880673) creating domain...
	I1014 20:17:30.032211  421087 main.go:141] libmachine: (flannel-880673) creating network...
	I1014 20:17:30.033766  421087 main.go:141] libmachine: (flannel-880673) DBG | found existing default network
	I1014 20:17:30.033965  421087 main.go:141] libmachine: (flannel-880673) DBG | <network connections='3'>
	I1014 20:17:30.033988  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>default</name>
	I1014 20:17:30.033999  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 20:17:30.034006  421087 main.go:141] libmachine: (flannel-880673) DBG |   <forward mode='nat'>
	I1014 20:17:30.034014  421087 main.go:141] libmachine: (flannel-880673) DBG |     <nat>
	I1014 20:17:30.034024  421087 main.go:141] libmachine: (flannel-880673) DBG |       <port start='1024' end='65535'/>
	I1014 20:17:30.034032  421087 main.go:141] libmachine: (flannel-880673) DBG |     </nat>
	I1014 20:17:30.034042  421087 main.go:141] libmachine: (flannel-880673) DBG |   </forward>
	I1014 20:17:30.034051  421087 main.go:141] libmachine: (flannel-880673) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 20:17:30.034060  421087 main.go:141] libmachine: (flannel-880673) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 20:17:30.034079  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 20:17:30.034096  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.034125  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 20:17:30.034137  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.034145  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.034152  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.034162  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.035359  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.035159  421144 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123b20}
	I1014 20:17:30.035396  421087 main.go:141] libmachine: (flannel-880673) DBG | defining private network:
	I1014 20:17:30.035426  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.035439  421087 main.go:141] libmachine: (flannel-880673) DBG | <network>
	I1014 20:17:30.035447  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>mk-flannel-880673</name>
	I1014 20:17:30.035453  421087 main.go:141] libmachine: (flannel-880673) DBG |   <dns enable='no'/>
	I1014 20:17:30.035461  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 20:17:30.035467  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.035475  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 20:17:30.035480  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.035487  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.035493  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.035502  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:30.041667  421087 main.go:141] libmachine: (flannel-880673) DBG | creating private network mk-flannel-880673 192.168.39.0/24...
	I1014 20:17:30.127637  421087 main.go:141] libmachine: (flannel-880673) DBG | private network mk-flannel-880673 192.168.39.0/24 created
	I1014 20:17:30.127999  421087 main.go:141] libmachine: (flannel-880673) DBG | <network>
	I1014 20:17:30.128023  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>mk-flannel-880673</name>
	I1014 20:17:30.128038  421087 main.go:141] libmachine: (flannel-880673) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 ...
	I1014 20:17:30.128060  421087 main.go:141] libmachine: (flannel-880673) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 20:17:30.128074  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>c5a771e5-e794-47b9-85b0-f17e7652bf2d</uuid>
	I1014 20:17:30.128085  421087 main.go:141] libmachine: (flannel-880673) DBG |   <bridge name='virbr2' stp='on' delay='0'/>
	I1014 20:17:30.128109  421087 main.go:141] libmachine: (flannel-880673) DBG |   <mac address='52:54:00:5d:dc:bd'/>
	I1014 20:17:30.128132  421087 main.go:141] libmachine: (flannel-880673) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 20:17:30.128141  421087 main.go:141] libmachine: (flannel-880673) DBG |   <dns enable='no'/>
	I1014 20:17:30.128155  421087 main.go:141] libmachine: (flannel-880673) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 20:17:30.128162  421087 main.go:141] libmachine: (flannel-880673) DBG |     <dhcp>
	I1014 20:17:30.128172  421087 main.go:141] libmachine: (flannel-880673) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 20:17:30.128182  421087 main.go:141] libmachine: (flannel-880673) DBG |     </dhcp>
	I1014 20:17:30.128191  421087 main.go:141] libmachine: (flannel-880673) DBG |   </ip>
	I1014 20:17:30.128201  421087 main.go:141] libmachine: (flannel-880673) DBG | </network>
	I1014 20:17:30.128212  421087 main.go:141] libmachine: (flannel-880673) DBG | 
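	
	The mk-flannel-880673 network created above is a host-only libvirt network (note the absence of a <forward> element in its XML) and can be inspected with standard virsh tooling; a sketch, assuming virsh talks to the same qemu:///system URI the driver uses:
	
	  # list networks and confirm mk-flannel-880673 is active
	  virsh --connect qemu:///system net-list --all
	  # show the full XML, including the generated bridge name and DHCP range
	  virsh --connect qemu:///system net-dumpxml mk-flannel-880673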
	I1014 20:17:30.128237  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.127980  421144 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:30.429228  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:30.429048  421144 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa...
	I1014 20:17:31.000581  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:31.000432  421144 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk...
	I1014 20:17:31.000623  421087 main.go:141] libmachine: (flannel-880673) DBG | Writing magic tar header
	I1014 20:17:31.000650  421087 main.go:141] libmachine: (flannel-880673) DBG | Writing SSH key tar header
	I1014 20:17:31.000710  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:31.000643  421144 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 ...
	I1014 20:17:31.000786  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673
	I1014 20:17:31.000844  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 20:17:31.000876  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673 (perms=drwx------)
	I1014 20:17:31.000888  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:31.000907  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 20:17:31.000915  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 20:17:31.000940  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 20:17:31.000951  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home/jenkins
	I1014 20:17:31.000963  421087 main.go:141] libmachine: (flannel-880673) DBG | checking permissions on dir: /home
	I1014 20:17:31.000973  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 20:17:31.000994  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 20:17:31.001007  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 20:17:31.001019  421087 main.go:141] libmachine: (flannel-880673) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 20:17:31.001028  421087 main.go:141] libmachine: (flannel-880673) defining domain...
	I1014 20:17:31.001071  421087 main.go:141] libmachine: (flannel-880673) DBG | skipping /home - not owner
	I1014 20:17:31.002262  421087 main.go:141] libmachine: (flannel-880673) defining domain using XML: 
	I1014 20:17:31.002290  421087 main.go:141] libmachine: (flannel-880673) <domain type='kvm'>
	I1014 20:17:31.002302  421087 main.go:141] libmachine: (flannel-880673)   <name>flannel-880673</name>
	I1014 20:17:31.002327  421087 main.go:141] libmachine: (flannel-880673)   <memory unit='MiB'>3072</memory>
	I1014 20:17:31.002358  421087 main.go:141] libmachine: (flannel-880673)   <vcpu>2</vcpu>
	I1014 20:17:31.002385  421087 main.go:141] libmachine: (flannel-880673)   <features>
	I1014 20:17:31.002407  421087 main.go:141] libmachine: (flannel-880673)     <acpi/>
	I1014 20:17:31.002421  421087 main.go:141] libmachine: (flannel-880673)     <apic/>
	I1014 20:17:31.002433  421087 main.go:141] libmachine: (flannel-880673)     <pae/>
	I1014 20:17:31.002439  421087 main.go:141] libmachine: (flannel-880673)   </features>
	I1014 20:17:31.002448  421087 main.go:141] libmachine: (flannel-880673)   <cpu mode='host-passthrough'>
	I1014 20:17:31.002459  421087 main.go:141] libmachine: (flannel-880673)   </cpu>
	I1014 20:17:31.002483  421087 main.go:141] libmachine: (flannel-880673)   <os>
	I1014 20:17:31.002501  421087 main.go:141] libmachine: (flannel-880673)     <type>hvm</type>
	I1014 20:17:31.002510  421087 main.go:141] libmachine: (flannel-880673)     <boot dev='cdrom'/>
	I1014 20:17:31.002521  421087 main.go:141] libmachine: (flannel-880673)     <boot dev='hd'/>
	I1014 20:17:31.002566  421087 main.go:141] libmachine: (flannel-880673)     <bootmenu enable='no'/>
	I1014 20:17:31.002586  421087 main.go:141] libmachine: (flannel-880673)   </os>
	I1014 20:17:31.002601  421087 main.go:141] libmachine: (flannel-880673)   <devices>
	I1014 20:17:31.002610  421087 main.go:141] libmachine: (flannel-880673)     <disk type='file' device='cdrom'>
	I1014 20:17:31.002633  421087 main.go:141] libmachine: (flannel-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/boot2docker.iso'/>
	I1014 20:17:31.002643  421087 main.go:141] libmachine: (flannel-880673)       <target dev='hdc' bus='scsi'/>
	I1014 20:17:31.002656  421087 main.go:141] libmachine: (flannel-880673)       <readonly/>
	I1014 20:17:31.002663  421087 main.go:141] libmachine: (flannel-880673)     </disk>
	I1014 20:17:31.002674  421087 main.go:141] libmachine: (flannel-880673)     <disk type='file' device='disk'>
	I1014 20:17:31.002687  421087 main.go:141] libmachine: (flannel-880673)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 20:17:31.002699  421087 main.go:141] libmachine: (flannel-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk'/>
	I1014 20:17:31.002712  421087 main.go:141] libmachine: (flannel-880673)       <target dev='hda' bus='virtio'/>
	I1014 20:17:31.002726  421087 main.go:141] libmachine: (flannel-880673)     </disk>
	I1014 20:17:31.002738  421087 main.go:141] libmachine: (flannel-880673)     <interface type='network'>
	I1014 20:17:31.002750  421087 main.go:141] libmachine: (flannel-880673)       <source network='mk-flannel-880673'/>
	I1014 20:17:31.002759  421087 main.go:141] libmachine: (flannel-880673)       <model type='virtio'/>
	I1014 20:17:31.002782  421087 main.go:141] libmachine: (flannel-880673)     </interface>
	I1014 20:17:31.002801  421087 main.go:141] libmachine: (flannel-880673)     <interface type='network'>
	I1014 20:17:31.002819  421087 main.go:141] libmachine: (flannel-880673)       <source network='default'/>
	I1014 20:17:31.002840  421087 main.go:141] libmachine: (flannel-880673)       <model type='virtio'/>
	I1014 20:17:31.002852  421087 main.go:141] libmachine: (flannel-880673)     </interface>
	I1014 20:17:31.002871  421087 main.go:141] libmachine: (flannel-880673)     <serial type='pty'>
	I1014 20:17:31.002884  421087 main.go:141] libmachine: (flannel-880673)       <target port='0'/>
	I1014 20:17:31.002895  421087 main.go:141] libmachine: (flannel-880673)     </serial>
	I1014 20:17:31.002909  421087 main.go:141] libmachine: (flannel-880673)     <console type='pty'>
	I1014 20:17:31.002916  421087 main.go:141] libmachine: (flannel-880673)       <target type='serial' port='0'/>
	I1014 20:17:31.002927  421087 main.go:141] libmachine: (flannel-880673)     </console>
	I1014 20:17:31.002937  421087 main.go:141] libmachine: (flannel-880673)     <rng model='virtio'>
	I1014 20:17:31.002949  421087 main.go:141] libmachine: (flannel-880673)       <backend model='random'>/dev/random</backend>
	I1014 20:17:31.002962  421087 main.go:141] libmachine: (flannel-880673)     </rng>
	I1014 20:17:31.002974  421087 main.go:141] libmachine: (flannel-880673)   </devices>
	I1014 20:17:31.002982  421087 main.go:141] libmachine: (flannel-880673) </domain>
	I1014 20:17:31.002993  421087 main.go:141] libmachine: (flannel-880673) 
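	
	The driver defines the domain through the libvirt API; the equivalent manual flow for the XML above would be roughly the following (a sketch only, with the XML saved to a hypothetical flannel-880673.xml):
	
	  # register the domain from its XML definition, then boot it
	  virsh --connect qemu:///system define flannel-880673.xml
	  virsh --connect qemu:///system start flannel-880673
	  # read back the live XML, which libvirt augments with generated
	  # UUIDs and PCI addresses, as seen in the dump further below
	  virsh --connect qemu:///system dumpxml flannel-880673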
	I1014 20:17:31.008419  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:25:31:9e in network default
	I1014 20:17:31.009073  421087 main.go:141] libmachine: (flannel-880673) starting domain...
	I1014 20:17:31.009113  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:31.009122  421087 main.go:141] libmachine: (flannel-880673) ensuring networks are active...
	I1014 20:17:31.009886  421087 main.go:141] libmachine: (flannel-880673) Ensuring network default is active
	I1014 20:17:31.010346  421087 main.go:141] libmachine: (flannel-880673) Ensuring network mk-flannel-880673 is active
	I1014 20:17:31.011061  421087 main.go:141] libmachine: (flannel-880673) getting domain XML...
	I1014 20:17:31.012375  421087 main.go:141] libmachine: (flannel-880673) DBG | starting domain XML:
	I1014 20:17:31.012399  421087 main.go:141] libmachine: (flannel-880673) DBG | <domain type='kvm'>
	I1014 20:17:31.012419  421087 main.go:141] libmachine: (flannel-880673) DBG |   <name>flannel-880673</name>
	I1014 20:17:31.012437  421087 main.go:141] libmachine: (flannel-880673) DBG |   <uuid>dd12b5ae-cea5-4553-b657-8781ab815471</uuid>
	I1014 20:17:31.012446  421087 main.go:141] libmachine: (flannel-880673) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:17:31.012457  421087 main.go:141] libmachine: (flannel-880673) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:17:31.012467  421087 main.go:141] libmachine: (flannel-880673) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:17:31.012479  421087 main.go:141] libmachine: (flannel-880673) DBG |   <os>
	I1014 20:17:31.012491  421087 main.go:141] libmachine: (flannel-880673) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:17:31.012501  421087 main.go:141] libmachine: (flannel-880673) DBG |     <boot dev='cdrom'/>
	I1014 20:17:31.012529  421087 main.go:141] libmachine: (flannel-880673) DBG |     <boot dev='hd'/>
	I1014 20:17:31.012552  421087 main.go:141] libmachine: (flannel-880673) DBG |     <bootmenu enable='no'/>
	I1014 20:17:31.012564  421087 main.go:141] libmachine: (flannel-880673) DBG |   </os>
	I1014 20:17:31.012574  421087 main.go:141] libmachine: (flannel-880673) DBG |   <features>
	I1014 20:17:31.012583  421087 main.go:141] libmachine: (flannel-880673) DBG |     <acpi/>
	I1014 20:17:31.012592  421087 main.go:141] libmachine: (flannel-880673) DBG |     <apic/>
	I1014 20:17:31.012607  421087 main.go:141] libmachine: (flannel-880673) DBG |     <pae/>
	I1014 20:17:31.012616  421087 main.go:141] libmachine: (flannel-880673) DBG |   </features>
	I1014 20:17:31.012623  421087 main.go:141] libmachine: (flannel-880673) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:17:31.012630  421087 main.go:141] libmachine: (flannel-880673) DBG |   <clock offset='utc'/>
	I1014 20:17:31.012646  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:17:31.012672  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:17:31.012682  421087 main.go:141] libmachine: (flannel-880673) DBG |   <on_crash>destroy</on_crash>
	I1014 20:17:31.012686  421087 main.go:141] libmachine: (flannel-880673) DBG |   <devices>
	I1014 20:17:31.012695  421087 main.go:141] libmachine: (flannel-880673) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:17:31.012702  421087 main.go:141] libmachine: (flannel-880673) DBG |     <disk type='file' device='cdrom'>
	I1014 20:17:31.012715  421087 main.go:141] libmachine: (flannel-880673) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:17:31.012731  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/boot2docker.iso'/>
	I1014 20:17:31.012743  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:17:31.012756  421087 main.go:141] libmachine: (flannel-880673) DBG |       <readonly/>
	I1014 20:17:31.012783  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:17:31.012803  421087 main.go:141] libmachine: (flannel-880673) DBG |     </disk>
	I1014 20:17:31.012819  421087 main.go:141] libmachine: (flannel-880673) DBG |     <disk type='file' device='disk'>
	I1014 20:17:31.012836  421087 main.go:141] libmachine: (flannel-880673) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:17:31.012852  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/flannel-880673.rawdisk'/>
	I1014 20:17:31.012861  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:17:31.012869  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:17:31.012879  421087 main.go:141] libmachine: (flannel-880673) DBG |     </disk>
	I1014 20:17:31.012890  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:17:31.012899  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:17:31.012919  421087 main.go:141] libmachine: (flannel-880673) DBG |     </controller>
	I1014 20:17:31.012938  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:17:31.012951  421087 main.go:141] libmachine: (flannel-880673) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:17:31.012961  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:17:31.012995  421087 main.go:141] libmachine: (flannel-880673) DBG |     </controller>
	I1014 20:17:31.013011  421087 main.go:141] libmachine: (flannel-880673) DBG |     <interface type='network'>
	I1014 20:17:31.013022  421087 main.go:141] libmachine: (flannel-880673) DBG |       <mac address='52:54:00:d6:0d:31'/>
	I1014 20:17:31.013033  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source network='mk-flannel-880673'/>
	I1014 20:17:31.013043  421087 main.go:141] libmachine: (flannel-880673) DBG |       <model type='virtio'/>
	I1014 20:17:31.013060  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:17:31.013072  421087 main.go:141] libmachine: (flannel-880673) DBG |     </interface>
	I1014 20:17:31.013092  421087 main.go:141] libmachine: (flannel-880673) DBG |     <interface type='network'>
	I1014 20:17:31.013105  421087 main.go:141] libmachine: (flannel-880673) DBG |       <mac address='52:54:00:25:31:9e'/>
	I1014 20:17:31.013115  421087 main.go:141] libmachine: (flannel-880673) DBG |       <source network='default'/>
	I1014 20:17:31.013126  421087 main.go:141] libmachine: (flannel-880673) DBG |       <model type='virtio'/>
	I1014 20:17:31.013147  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:17:31.013159  421087 main.go:141] libmachine: (flannel-880673) DBG |     </interface>
	I1014 20:17:31.013166  421087 main.go:141] libmachine: (flannel-880673) DBG |     <serial type='pty'>
	I1014 20:17:31.013175  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target type='isa-serial' port='0'>
	I1014 20:17:31.013185  421087 main.go:141] libmachine: (flannel-880673) DBG |         <model name='isa-serial'/>
	I1014 20:17:31.013193  421087 main.go:141] libmachine: (flannel-880673) DBG |       </target>
	I1014 20:17:31.013202  421087 main.go:141] libmachine: (flannel-880673) DBG |     </serial>
	I1014 20:17:31.013245  421087 main.go:141] libmachine: (flannel-880673) DBG |     <console type='pty'>
	I1014 20:17:31.013278  421087 main.go:141] libmachine: (flannel-880673) DBG |       <target type='serial' port='0'/>
	I1014 20:17:31.013288  421087 main.go:141] libmachine: (flannel-880673) DBG |     </console>
	I1014 20:17:31.013295  421087 main.go:141] libmachine: (flannel-880673) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:17:31.013304  421087 main.go:141] libmachine: (flannel-880673) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:17:31.013323  421087 main.go:141] libmachine: (flannel-880673) DBG |     <audio id='1' type='none'/>
	I1014 20:17:31.013338  421087 main.go:141] libmachine: (flannel-880673) DBG |     <memballoon model='virtio'>
	I1014 20:17:31.013347  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:17:31.013355  421087 main.go:141] libmachine: (flannel-880673) DBG |     </memballoon>
	I1014 20:17:31.013365  421087 main.go:141] libmachine: (flannel-880673) DBG |     <rng model='virtio'>
	I1014 20:17:31.013374  421087 main.go:141] libmachine: (flannel-880673) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:17:31.013389  421087 main.go:141] libmachine: (flannel-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:17:31.013396  421087 main.go:141] libmachine: (flannel-880673) DBG |     </rng>
	I1014 20:17:31.013402  421087 main.go:141] libmachine: (flannel-880673) DBG |   </devices>
	I1014 20:17:31.013410  421087 main.go:141] libmachine: (flannel-880673) DBG | </domain>
	I1014 20:17:31.013416  421087 main.go:141] libmachine: (flannel-880673) DBG | 
	I1014 20:17:32.497442  421087 main.go:141] libmachine: (flannel-880673) waiting for domain to start...
	I1014 20:17:32.499079  421087 main.go:141] libmachine: (flannel-880673) domain is now running
	I1014 20:17:32.499111  421087 main.go:141] libmachine: (flannel-880673) waiting for IP...
	I1014 20:17:32.500226  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:32.501130  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:32.501160  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:32.501671  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:32.501780  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:32.501708  421144 retry.go:31] will retry after 305.051771ms: waiting for domain to come up
	I1014 20:17:32.808976  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:32.810041  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:32.810067  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:32.810866  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:32.811102  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:32.810990  421144 retry.go:31] will retry after 317.455974ms: waiting for domain to come up
	I1014 20:17:33.130005  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:33.130798  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:33.130828  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:33.131276  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:33.131297  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:33.131261  421144 retry.go:31] will retry after 310.529894ms: waiting for domain to come up
	I1014 20:17:33.444064  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:33.444826  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:33.444865  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:33.445237  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:33.445267  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:33.445205  421144 retry.go:31] will retry after 585.28514ms: waiting for domain to come up
	I1014 20:17:34.032915  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:34.033664  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:34.033693  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:34.034077  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:34.034129  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:34.034053  421144 retry.go:31] will retry after 747.322867ms: waiting for domain to come up
	I1014 20:17:34.783858  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:34.784696  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:34.784728  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:34.785194  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:34.785254  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:34.785162  421144 retry.go:31] will retry after 668.737068ms: waiting for domain to come up
	I1014 20:17:33.099654  421402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:17:33.099715  421402 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 20:17:33.099733  421402 cache.go:58] Caching tarball of preloaded images
	I1014 20:17:33.099879  421402 preload.go:233] Found /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 20:17:33.099896  421402 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1014 20:17:33.100050  421402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json ...
	I1014 20:17:33.100079  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json: {Name:mk18ebb7d610401402586eb4b220796b84614a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:33.100282  421402 start.go:360] acquireMachinesLock for bridge-880673: {Name:mk52d449be3ec71c122454fdb0aeda759b1051fc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
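	
	The preload handling above only verifies that a cached tarball exists and skips the download when it does; the cache can be inspected directly on the host (path taken from the log):
	
	  # verify the preloaded image tarball minikube found in its cache
	  ls -lh /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/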
	I1014 20:17:38.890389  418230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:17:38.890506  418230 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:17:38.890678  418230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:17:38.890809  418230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:17:38.890950  418230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:17:38.891038  418230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:17:38.961932  418230 out.go:252]   - Generating certificates and keys ...
	I1014 20:17:38.962078  418230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:17:38.962166  418230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:17:38.962264  418230 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:17:38.962352  418230 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:17:38.962421  418230 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:17:38.962485  418230 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:17:38.962584  418230 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:17:38.962826  418230 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-880673 localhost] and IPs [192.168.72.117 127.0.0.1 ::1]
	I1014 20:17:38.962920  418230 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:17:38.963114  418230 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-880673 localhost] and IPs [192.168.72.117 127.0.0.1 ::1]
	I1014 20:17:38.963198  418230 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:17:38.963305  418230 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:17:38.963381  418230 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:17:38.963497  418230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:17:38.963575  418230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:17:38.963661  418230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:17:38.963737  418230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:17:38.963821  418230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:17:38.963928  418230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:17:38.964059  418230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:17:38.964171  418230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:17:39.026775  418230 out.go:252]   - Booting up control plane ...
	I1014 20:17:39.026925  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:17:39.027023  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:17:39.027117  418230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:17:39.027269  418230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:17:39.027422  418230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:17:39.027596  418230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:17:39.027733  418230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:17:39.027789  418230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:17:39.028004  418230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:17:39.028177  418230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:17:39.028268  418230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003674924s
	I1014 20:17:39.028418  418230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:17:39.028554  418230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.117:8443/livez
	I1014 20:17:39.028710  418230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:17:39.028823  418230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:17:39.028936  418230 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.008406755s
	I1014 20:17:39.029059  418230 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.666287067s
	I1014 20:17:39.029163  418230 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.502243083s
	I1014 20:17:39.029294  418230 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:17:39.029471  418230 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:17:39.029572  418230 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:17:39.029854  418230 kubeadm.go:318] [mark-control-plane] Marking the node enable-default-cni-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:17:39.029946  418230 kubeadm.go:318] [bootstrap-token] Using token: 1mj9ds.b0l9y0w9wlsd6ew0
	I1014 20:17:39.097141  418230 out.go:252]   - Configuring RBAC rules ...
	I1014 20:17:39.097343  418230 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:17:39.097512  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:17:39.097729  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:17:39.097942  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:17:39.098097  418230 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:17:39.098231  418230 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:17:39.098419  418230 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:17:39.098476  418230 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:17:39.098538  418230 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:17:39.098552  418230 kubeadm.go:318] 
	I1014 20:17:39.098668  418230 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:17:39.098688  418230 kubeadm.go:318] 
	I1014 20:17:39.098802  418230 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:17:39.098811  418230 kubeadm.go:318] 
	I1014 20:17:39.098840  418230 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:17:39.098905  418230 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:17:39.098975  418230 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:17:39.098984  418230 kubeadm.go:318] 
	I1014 20:17:39.099058  418230 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:17:39.099067  418230 kubeadm.go:318] 
	I1014 20:17:39.099131  418230 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:17:39.099140  418230 kubeadm.go:318] 
	I1014 20:17:39.099222  418230 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:17:39.099357  418230 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:17:39.099447  418230 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:17:39.099455  418230 kubeadm.go:318] 
	I1014 20:17:39.099561  418230 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:17:39.099699  418230 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:17:39.099718  418230 kubeadm.go:318] 
	I1014 20:17:39.099838  418230 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1mj9ds.b0l9y0w9wlsd6ew0 \
	I1014 20:17:39.099991  418230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:17:39.100028  418230 kubeadm.go:318] 	--control-plane 
	I1014 20:17:39.100033  418230 kubeadm.go:318] 
	I1014 20:17:39.100147  418230 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:17:39.100162  418230 kubeadm.go:318] 
	I1014 20:17:39.100280  418230 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1mj9ds.b0l9y0w9wlsd6ew0 \
	I1014 20:17:39.100457  418230 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
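	
	The --discovery-token-ca-cert-hash value printed in the join commands above can be recomputed from the cluster CA on the control-plane node; the standard kubeadm recipe is:
	
	  # derive the sha256 hash of the CA's public key, in the form kubeadm expects
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'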
	I1014 20:17:39.100474  418230 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:17:39.118443  418230 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 20:17:35.456013  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:35.456740  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:35.456768  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:35.457270  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:35.457334  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:35.457235  421144 retry.go:31] will retry after 991.153351ms: waiting for domain to come up
	I1014 20:17:36.450676  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:36.451355  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:36.451390  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:36.451760  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:36.451811  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:36.451748  421144 retry.go:31] will retry after 1.136068871s: waiting for domain to come up
	I1014 20:17:37.589863  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:37.590717  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:37.590749  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:37.591025  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:37.591091  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:37.591024  421144 retry.go:31] will retry after 1.34377164s: waiting for domain to come up
	I1014 20:17:38.936574  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:38.937271  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:38.937297  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:38.937637  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:38.937678  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:38.937613  421144 retry.go:31] will retry after 1.860669329s: waiting for domain to come up
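	
	The retry loop above polls libvirt's DHCP lease table for the domain's MAC address and falls back to an ARP listing when no lease exists yet; while it runs, the lease can also be watched by hand (a sketch, network name from the log):
	
	  # the VM's IP appears here once its DHCP client has negotiated a lease
	  watch -n1 "virsh --connect qemu:///system net-dhcp-leases mk-flannel-880673"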
	I1014 20:17:39.160343  418230 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 20:17:39.176721  418230 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
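	
	The 496-byte conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in this log, but a representative bridge conflist has the following shape (illustrative sketch only, with assumed values, not the file minikube wrote):
	
	  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
	  {
	    "cniVersion": "0.3.1",
	    "name": "bridge",
	    "plugins": [
	      {
	        "type": "bridge",
	        "bridge": "cni0",
	        "isDefaultGateway": true,
	        "ipMasq": true,
	        "hairpinMode": true,
	        "ipam": {
	          "type": "host-local",
	          "subnet": "10.244.0.0/16"
	        }
	      },
	      {
	        "type": "portmap",
	        "capabilities": {"portMappings": true}
	      }
	    ]
	  }
	  EOF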
	I1014 20:17:39.203607  418230 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:17:39.203696  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:39.203714  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-880673 minikube.k8s.io/updated_at=2025_10_14T20_17_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=enable-default-cni-880673 minikube.k8s.io/primary=true
	I1014 20:17:39.440085  418230 ops.go:34] apiserver oom_adj: -16
	I1014 20:17:39.440263  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:39.940448  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:40.440720  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:40.940513  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:41.440788  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:41.940939  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:42.441010  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:42.940536  418230 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:17:43.029941  418230 kubeadm.go:1113] duration metric: took 3.826330212s to wait for elevateKubeSystemPrivileges
	I1014 20:17:43.029988  418230 kubeadm.go:402] duration metric: took 17.898921947s to StartCluster
	I1014 20:17:43.030016  418230 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:43.030113  418230 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:17:43.031904  418230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:43.032222  418230 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.117 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:17:43.032269  418230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:17:43.032292  418230 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:17:43.032407  418230 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-880673"
	I1014 20:17:43.032422  418230 addons.go:238] Setting addon storage-provisioner=true in "enable-default-cni-880673"
	I1014 20:17:43.032463  418230 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-880673"
	I1014 20:17:43.032473  418230 host.go:66] Checking if "enable-default-cni-880673" exists ...
	I1014 20:17:43.032485  418230 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-880673"
	I1014 20:17:43.032490  418230 config.go:182] Loaded profile config "enable-default-cni-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:43.032996  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.033039  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.033051  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.033089  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.037501  418230 out.go:179] * Verifying Kubernetes components...
	I1014 20:17:43.038992  418230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:43.052189  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39551
	I1014 20:17:43.052218  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I1014 20:17:43.052848  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.052899  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.053421  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.053449  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.053693  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.053718  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.053804  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.054063  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.054246  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.054403  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.054451  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.059677  418230 addons.go:238] Setting addon default-storageclass=true in "enable-default-cni-880673"
	I1014 20:17:43.059726  418230 host.go:66] Checking if "enable-default-cni-880673" exists ...
	I1014 20:17:43.060091  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.060143  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.074894  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38151
	I1014 20:17:43.075565  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.076179  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.076207  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.076773  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.077114  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.078105  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44993
	I1014 20:17:43.078709  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.079277  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.079301  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.079729  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.080382  418230 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:43.080438  418230 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:43.080445  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .DriverName
	I1014 20:17:43.082407  418230 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:17:43.083573  418230 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:17:43.083596  418230 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:17:43.083626  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHHostname
	I1014 20:17:43.088290  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.089020  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:bd:aa", ip: ""} in network mk-enable-default-cni-880673: {Iface:virbr4 ExpiryTime:2025-10-14 21:17:13 +0000 UTC Type:0 Mac:52:54:00:e0:bd:aa Iaid: IPaddr:192.168.72.117 Prefix:24 Hostname:enable-default-cni-880673 Clientid:01:52:54:00:e0:bd:aa}
	I1014 20:17:43.089061  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined IP address 192.168.72.117 and MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.089382  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHPort
	I1014 20:17:43.089653  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHKeyPath
	I1014 20:17:43.089859  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHUsername
	I1014 20:17:43.090060  418230 sshutil.go:53] new ssh client: &{IP:192.168.72.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/enable-default-cni-880673/id_rsa Username:docker}
	I1014 20:17:43.101177  418230 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I1014 20:17:43.101898  418230 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:43.102698  418230 main.go:141] libmachine: Using API Version  1
	I1014 20:17:43.102744  418230 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:43.103225  418230 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:43.103560  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetState
	I1014 20:17:43.106136  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .DriverName
	I1014 20:17:43.106436  418230 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:17:43.106456  418230 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:17:43.106479  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHHostname
	I1014 20:17:43.111214  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.111979  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:bd:aa", ip: ""} in network mk-enable-default-cni-880673: {Iface:virbr4 ExpiryTime:2025-10-14 21:17:13 +0000 UTC Type:0 Mac:52:54:00:e0:bd:aa Iaid: IPaddr:192.168.72.117 Prefix:24 Hostname:enable-default-cni-880673 Clientid:01:52:54:00:e0:bd:aa}
	I1014 20:17:43.112006  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | domain enable-default-cni-880673 has defined IP address 192.168.72.117 and MAC address 52:54:00:e0:bd:aa in network mk-enable-default-cni-880673
	I1014 20:17:43.112371  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHPort
	I1014 20:17:43.112678  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHKeyPath
	I1014 20:17:43.112888  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .GetSSHUsername
	I1014 20:17:43.113065  418230 sshutil.go:53] new ssh client: &{IP:192.168.72.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/enable-default-cni-880673/id_rsa Username:docker}
	I1014 20:17:43.261499  418230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 20:17:43.335764  418230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:17:43.474105  418230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:17:43.540259  418230 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:17:43.930623  418230 start.go:976] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1014 20:17:43.930746  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.930783  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.931135  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.931156  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:43.931171  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.931181  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.932134  418230 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-880673" to be "Ready" ...
	I1014 20:17:43.932297  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:43.932355  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.932364  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:43.956853  418230 node_ready.go:49] node "enable-default-cni-880673" is "Ready"
	I1014 20:17:43.956891  418230 node_ready.go:38] duration metric: took 24.726793ms for node "enable-default-cni-880673" to be "Ready" ...
	I1014 20:17:43.956906  418230 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:17:43.957005  418230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:17:43.967856  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:43.967884  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:43.968219  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:43.968265  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:43.968273  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.439687  418230 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-880673" context rescaled to 1 replicas
	I1014 20:17:44.514543  418230 api_server.go:72] duration metric: took 1.482276829s to wait for apiserver process to appear ...
	I1014 20:17:44.514577  418230 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:17:44.514600  418230 api_server.go:253] Checking apiserver healthz at https://192.168.72.117:8443/healthz ...
	I1014 20:17:44.515252  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:44.515327  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:44.515655  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:44.515673  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.515682  418230 main.go:141] libmachine: Making call to close driver server
	I1014 20:17:44.515691  418230 main.go:141] libmachine: (enable-default-cni-880673) Calling .Close
	I1014 20:17:44.516518  418230 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:17:44.516540  418230 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:17:44.516569  418230 main.go:141] libmachine: (enable-default-cni-880673) DBG | Closing plugin on server side
	I1014 20:17:44.519735  418230 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 20:17:40.799883  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:40.800545  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:40.800574  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:40.800971  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:40.801091  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:40.800938  421144 retry.go:31] will retry after 2.523760029s: waiting for domain to come up
	I1014 20:17:43.328085  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:43.328978  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:43.329008  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:43.329553  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:43.329587  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:43.329517  421144 retry.go:31] will retry after 3.135854458s: waiting for domain to come up
	I1014 20:17:44.520973  418230 addons.go:514] duration metric: took 1.488668063s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 20:17:44.537377  418230 api_server.go:279] https://192.168.72.117:8443/healthz returned 200:
	ok
	I1014 20:17:44.538778  418230 api_server.go:141] control plane version: v1.34.1
	I1014 20:17:44.538817  418230 api_server.go:131] duration metric: took 24.228488ms to wait for apiserver health ...
	I1014 20:17:44.538829  418230 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:17:44.548431  418230 system_pods.go:59] 8 kube-system pods found
	I1014 20:17:44.548467  418230 system_pods.go:61] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.548480  418230 system_pods.go:61] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.548491  418230 system_pods.go:61] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.548499  418230 system_pods.go:61] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.548509  418230 system_pods.go:61] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.548518  418230 system_pods.go:61] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.548546  418230 system_pods.go:61] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.548554  418230 system_pods.go:61] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.548564  418230 system_pods.go:74] duration metric: took 9.726813ms to wait for pod list to return data ...
	I1014 20:17:44.548575  418230 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:17:44.556944  418230 default_sa.go:45] found service account: "default"
	I1014 20:17:44.556977  418230 default_sa.go:55] duration metric: took 8.393024ms for default service account to be created ...
	I1014 20:17:44.556993  418230 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:17:44.563518  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:44.563560  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.563570  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.563577  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.563595  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.563615  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.563626  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.563639  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.563660  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.563710  418230 retry.go:31] will retry after 246.086816ms: missing components: kube-dns, kube-proxy
	I1014 20:17:44.817361  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:44.817404  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.817418  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:44.817431  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:44.817444  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:44.817454  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:44.817466  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:44.817475  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:44.817486  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:44.817513  418230 retry.go:31] will retry after 303.170286ms: missing components: kube-dns, kube-proxy
	I1014 20:17:45.127070  418230 system_pods.go:86] 8 kube-system pods found
	I1014 20:17:45.127115  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.127126  418230 system_pods.go:89] "coredns-66bc5c9577-xp7vv" [ac189f38-a53f-4923-bfcb-eea2eca9a1c2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.127135  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:45.127145  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:45.127155  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:45.127165  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 20:17:45.127176  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:45.127184  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:17:45.127206  418230 retry.go:31] will retry after 461.46354ms: missing components: kube-dns, kube-proxy
	I1014 20:17:45.594052  418230 system_pods.go:86] 7 kube-system pods found
	I1014 20:17:45.594089  418230 system_pods.go:89] "coredns-66bc5c9577-489jr" [4369c2bf-dff4-4fa0-bdaa-aa27b36a6579] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:17:45.594100  418230 system_pods.go:89] "etcd-enable-default-cni-880673" [b81e6cb4-fa96-43f1-b280-c33dfc457749] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:17:45.594111  418230 system_pods.go:89] "kube-apiserver-enable-default-cni-880673" [2ba544f3-cb06-47cc-8a00-b5199b2f748f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:17:45.594120  418230 system_pods.go:89] "kube-controller-manager-enable-default-cni-880673" [79e751b1-935b-4831-aa92-69e55520142e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 20:17:45.594127  418230 system_pods.go:89] "kube-proxy-qm5zb" [83365521-5a92-4fb7-9ad1-653d046d8177] Running
	I1014 20:17:45.594134  418230 system_pods.go:89] "kube-scheduler-enable-default-cni-880673" [c0c210a0-2c5e-4e5f-af88-a43d7ab22b2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 20:17:45.594138  418230 system_pods.go:89] "storage-provisioner" [a7fd5b7c-9f8e-4629-93e6-0767ff496285] Running
	I1014 20:17:45.594149  418230 system_pods.go:126] duration metric: took 1.037148596s to wait for k8s-apps to be running ...
	I1014 20:17:45.594158  418230 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:17:45.594210  418230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:17:45.614925  418230 system_svc.go:56] duration metric: took 20.752932ms WaitForService to wait for kubelet
	I1014 20:17:45.614960  418230 kubeadm.go:586] duration metric: took 2.582700783s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:17:45.614986  418230 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:17:45.618645  418230 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:17:45.618682  418230 node_conditions.go:123] node cpu capacity is 2
	I1014 20:17:45.618697  418230 node_conditions.go:105] duration metric: took 3.70399ms to run NodePressure ...
	I1014 20:17:45.618713  418230 start.go:241] waiting for startup goroutines ...
	I1014 20:17:45.618723  418230 start.go:246] waiting for cluster config update ...
	I1014 20:17:45.618738  418230 start.go:255] writing updated cluster config ...
	I1014 20:17:45.619091  418230 ssh_runner.go:195] Run: rm -f paused
	I1014 20:17:45.624700  418230 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:17:45.629756  418230 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-489jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:17:46.466713  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:46.467493  421087 main.go:141] libmachine: (flannel-880673) DBG | no network interface addresses found for domain flannel-880673 (source=lease)
	I1014 20:17:46.467523  421087 main.go:141] libmachine: (flannel-880673) DBG | trying to list again with source=arp
	I1014 20:17:46.467849  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find current IP address of domain flannel-880673 in network mk-flannel-880673 (interfaces detected: [])
	I1014 20:17:46.467923  421087 main.go:141] libmachine: (flannel-880673) DBG | I1014 20:17:46.467854  421144 retry.go:31] will retry after 3.337883952s: waiting for domain to come up
	I1014 20:17:49.808402  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:49.809278  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has current primary IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:49.809328  421087 main.go:141] libmachine: (flannel-880673) found domain IP: 192.168.39.78
	I1014 20:17:49.809341  421087 main.go:141] libmachine: (flannel-880673) reserving static IP address...
	I1014 20:17:49.809868  421087 main.go:141] libmachine: (flannel-880673) DBG | unable to find host DHCP lease matching {name: "flannel-880673", mac: "52:54:00:d6:0d:31", ip: "192.168.39.78"} in network mk-flannel-880673
	I1014 20:17:51.614250  421402 start.go:364] duration metric: took 18.513922492s to acquireMachinesLock for "bridge-880673"
	I1014 20:17:51.614349  421402 start.go:93] Provisioning new machine with config: &{Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:17:51.614493  421402 start.go:125] createHost starting for "" (driver="kvm2")
	W1014 20:17:47.636567  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:49.643515  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:51.617418  421402 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 20:17:51.617665  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:17:51.617720  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:17:51.636823  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1014 20:17:51.637339  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:17:51.637929  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:17:51.637957  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:17:51.638388  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:17:51.638627  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:17:51.638800  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:17:51.638988  421402 start.go:159] libmachine.API.Create for "bridge-880673" (driver="kvm2")
	I1014 20:17:51.639020  421402 client.go:168] LocalClient.Create starting
	I1014 20:17:51.639055  421402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem
	I1014 20:17:51.639092  421402 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:51.639111  421402 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:51.639181  421402 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem
	I1014 20:17:51.639218  421402 main.go:141] libmachine: Decoding PEM data...
	I1014 20:17:51.639252  421402 main.go:141] libmachine: Parsing certificate...
	I1014 20:17:51.639278  421402 main.go:141] libmachine: Running pre-create checks...
	I1014 20:17:51.639290  421402 main.go:141] libmachine: (bridge-880673) Calling .PreCreateCheck
	I1014 20:17:51.639675  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:17:51.640139  421402 main.go:141] libmachine: Creating machine...
	I1014 20:17:51.640157  421402 main.go:141] libmachine: (bridge-880673) Calling .Create
	I1014 20:17:51.640289  421402 main.go:141] libmachine: (bridge-880673) creating domain...
	I1014 20:17:51.640351  421402 main.go:141] libmachine: (bridge-880673) creating network...
	I1014 20:17:51.641677  421402 main.go:141] libmachine: (bridge-880673) DBG | found existing default network
	I1014 20:17:51.641912  421402 main.go:141] libmachine: (bridge-880673) DBG | <network connections='3'>
	I1014 20:17:51.641935  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>default</name>
	I1014 20:17:51.641947  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1014 20:17:51.641971  421402 main.go:141] libmachine: (bridge-880673) DBG |   <forward mode='nat'>
	I1014 20:17:51.641984  421402 main.go:141] libmachine: (bridge-880673) DBG |     <nat>
	I1014 20:17:51.641993  421402 main.go:141] libmachine: (bridge-880673) DBG |       <port start='1024' end='65535'/>
	I1014 20:17:51.642005  421402 main.go:141] libmachine: (bridge-880673) DBG |     </nat>
	I1014 20:17:51.642012  421402 main.go:141] libmachine: (bridge-880673) DBG |   </forward>
	I1014 20:17:51.642025  421402 main.go:141] libmachine: (bridge-880673) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1014 20:17:51.642037  421402 main.go:141] libmachine: (bridge-880673) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1014 20:17:51.642047  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1014 20:17:51.642054  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.642086  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1014 20:17:51.642109  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.642131  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.642138  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.642149  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.643014  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.642867  422003 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:5d:dc:bd} reservation:<nil>}
	I1014 20:17:51.643548  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.643456  422003 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:05:7c:de} reservation:<nil>}
	I1014 20:17:51.644332  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.644240  422003 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025eb90}
	I1014 20:17:51.644380  421402 main.go:141] libmachine: (bridge-880673) DBG | defining private network:
	I1014 20:17:51.644402  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.644416  421402 main.go:141] libmachine: (bridge-880673) DBG | <network>
	I1014 20:17:51.644428  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>mk-bridge-880673</name>
	I1014 20:17:51.644441  421402 main.go:141] libmachine: (bridge-880673) DBG |   <dns enable='no'/>
	I1014 20:17:51.644456  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:17:51.644468  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.644478  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:17:51.644502  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.644524  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.644536  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.644546  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.650988  421402 main.go:141] libmachine: (bridge-880673) DBG | creating private network mk-bridge-880673 192.168.61.0/24...
	I1014 20:17:51.727429  421402 main.go:141] libmachine: (bridge-880673) DBG | private network mk-bridge-880673 192.168.61.0/24 created
	I1014 20:17:51.727727  421402 main.go:141] libmachine: (bridge-880673) DBG | <network>
	I1014 20:17:51.727743  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>mk-bridge-880673</name>
	I1014 20:17:51.727754  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>ecd63ac0-f4e0-4f34-a66c-58986d00c010</uuid>
	I1014 20:17:51.727765  421402 main.go:141] libmachine: (bridge-880673) DBG |   <bridge name='virbr3' stp='on' delay='0'/>
	I1014 20:17:51.727777  421402 main.go:141] libmachine: (bridge-880673) setting up store path in /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 ...
	I1014 20:17:51.727797  421402 main.go:141] libmachine: (bridge-880673) building disk image from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 20:17:51.727810  421402 main.go:141] libmachine: (bridge-880673) DBG |   <mac address='52:54:00:71:72:11'/>
	I1014 20:17:51.727820  421402 main.go:141] libmachine: (bridge-880673) DBG |   <dns enable='no'/>
	I1014 20:17:51.727826  421402 main.go:141] libmachine: (bridge-880673) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 20:17:51.727833  421402 main.go:141] libmachine: (bridge-880673) DBG |     <dhcp>
	I1014 20:17:51.727842  421402 main.go:141] libmachine: (bridge-880673) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 20:17:51.727889  421402 main.go:141] libmachine: (bridge-880673) DBG |     </dhcp>
	I1014 20:17:51.727920  421402 main.go:141] libmachine: (bridge-880673) DBG |   </ip>
	I1014 20:17:51.727951  421402 main.go:141] libmachine: (bridge-880673) Downloading /home/jenkins/minikube-integration/21409-364627/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso...
	I1014 20:17:51.727966  421402 main.go:141] libmachine: (bridge-880673) DBG | </network>
	I1014 20:17:51.727988  421402 main.go:141] libmachine: (bridge-880673) DBG | 
	I1014 20:17:51.728007  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:51.727751  422003 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:52.004395  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.004250  422003 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa...
	I1014 20:17:52.087668  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.087546  422003 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk...
	I1014 20:17:52.087697  421402 main.go:141] libmachine: (bridge-880673) DBG | Writing magic tar header
	I1014 20:17:52.087707  421402 main.go:141] libmachine: (bridge-880673) DBG | Writing SSH key tar header
	I1014 20:17:52.087803  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:52.087707  422003 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 ...
	I1014 20:17:52.087897  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673
	I1014 20:17:52.087924  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673 (perms=drwx------)
	I1014 20:17:52.087937  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube/machines
	I1014 20:17:52.087957  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:17:52.087971  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21409-364627
	I1014 20:17:52.087993  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1014 20:17:52.088005  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home/jenkins
	I1014 20:17:52.088019  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube/machines (perms=drwxr-xr-x)
	I1014 20:17:52.088038  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627/.minikube (perms=drwxr-xr-x)
	I1014 20:17:52.088051  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration/21409-364627 (perms=drwxrwxr-x)
	I1014 20:17:52.088064  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 20:17:52.088076  421402 main.go:141] libmachine: (bridge-880673) DBG | checking permissions on dir: /home
	I1014 20:17:52.088086  421402 main.go:141] libmachine: (bridge-880673) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 20:17:52.088162  421402 main.go:141] libmachine: (bridge-880673) DBG | skipping /home - not owner
	I1014 20:17:52.088179  421402 main.go:141] libmachine: (bridge-880673) defining domain...
	I1014 20:17:52.089554  421402 main.go:141] libmachine: (bridge-880673) defining domain using XML: 
	I1014 20:17:52.089570  421402 main.go:141] libmachine: (bridge-880673) <domain type='kvm'>
	I1014 20:17:52.089576  421402 main.go:141] libmachine: (bridge-880673)   <name>bridge-880673</name>
	I1014 20:17:52.089580  421402 main.go:141] libmachine: (bridge-880673)   <memory unit='MiB'>3072</memory>
	I1014 20:17:52.089585  421402 main.go:141] libmachine: (bridge-880673)   <vcpu>2</vcpu>
	I1014 20:17:52.089589  421402 main.go:141] libmachine: (bridge-880673)   <features>
	I1014 20:17:52.089593  421402 main.go:141] libmachine: (bridge-880673)     <acpi/>
	I1014 20:17:52.089597  421402 main.go:141] libmachine: (bridge-880673)     <apic/>
	I1014 20:17:52.089614  421402 main.go:141] libmachine: (bridge-880673)     <pae/>
	I1014 20:17:52.089621  421402 main.go:141] libmachine: (bridge-880673)   </features>
	I1014 20:17:52.089626  421402 main.go:141] libmachine: (bridge-880673)   <cpu mode='host-passthrough'>
	I1014 20:17:52.089630  421402 main.go:141] libmachine: (bridge-880673)   </cpu>
	I1014 20:17:52.089635  421402 main.go:141] libmachine: (bridge-880673)   <os>
	I1014 20:17:52.089639  421402 main.go:141] libmachine: (bridge-880673)     <type>hvm</type>
	I1014 20:17:52.089643  421402 main.go:141] libmachine: (bridge-880673)     <boot dev='cdrom'/>
	I1014 20:17:52.089655  421402 main.go:141] libmachine: (bridge-880673)     <boot dev='hd'/>
	I1014 20:17:52.089663  421402 main.go:141] libmachine: (bridge-880673)     <bootmenu enable='no'/>
	I1014 20:17:52.089672  421402 main.go:141] libmachine: (bridge-880673)   </os>
	I1014 20:17:52.089683  421402 main.go:141] libmachine: (bridge-880673)   <devices>
	I1014 20:17:52.089690  421402 main.go:141] libmachine: (bridge-880673)     <disk type='file' device='cdrom'>
	I1014 20:17:52.089698  421402 main.go:141] libmachine: (bridge-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/boot2docker.iso'/>
	I1014 20:17:52.089705  421402 main.go:141] libmachine: (bridge-880673)       <target dev='hdc' bus='scsi'/>
	I1014 20:17:52.089710  421402 main.go:141] libmachine: (bridge-880673)       <readonly/>
	I1014 20:17:52.089713  421402 main.go:141] libmachine: (bridge-880673)     </disk>
	I1014 20:17:52.089719  421402 main.go:141] libmachine: (bridge-880673)     <disk type='file' device='disk'>
	I1014 20:17:52.089726  421402 main.go:141] libmachine: (bridge-880673)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 20:17:52.089738  421402 main.go:141] libmachine: (bridge-880673)       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk'/>
	I1014 20:17:52.089749  421402 main.go:141] libmachine: (bridge-880673)       <target dev='hda' bus='virtio'/>
	I1014 20:17:52.089758  421402 main.go:141] libmachine: (bridge-880673)     </disk>
	I1014 20:17:52.089767  421402 main.go:141] libmachine: (bridge-880673)     <interface type='network'>
	I1014 20:17:52.089774  421402 main.go:141] libmachine: (bridge-880673)       <source network='mk-bridge-880673'/>
	I1014 20:17:52.089780  421402 main.go:141] libmachine: (bridge-880673)       <model type='virtio'/>
	I1014 20:17:52.089784  421402 main.go:141] libmachine: (bridge-880673)     </interface>
	I1014 20:17:52.089791  421402 main.go:141] libmachine: (bridge-880673)     <interface type='network'>
	I1014 20:17:52.089796  421402 main.go:141] libmachine: (bridge-880673)       <source network='default'/>
	I1014 20:17:52.089804  421402 main.go:141] libmachine: (bridge-880673)       <model type='virtio'/>
	I1014 20:17:52.089812  421402 main.go:141] libmachine: (bridge-880673)     </interface>
	I1014 20:17:52.089824  421402 main.go:141] libmachine: (bridge-880673)     <serial type='pty'>
	I1014 20:17:52.089832  421402 main.go:141] libmachine: (bridge-880673)       <target port='0'/>
	I1014 20:17:52.089841  421402 main.go:141] libmachine: (bridge-880673)     </serial>
	I1014 20:17:52.089850  421402 main.go:141] libmachine: (bridge-880673)     <console type='pty'>
	I1014 20:17:52.089864  421402 main.go:141] libmachine: (bridge-880673)       <target type='serial' port='0'/>
	I1014 20:17:52.089872  421402 main.go:141] libmachine: (bridge-880673)     </console>
	I1014 20:17:52.089876  421402 main.go:141] libmachine: (bridge-880673)     <rng model='virtio'>
	I1014 20:17:52.089889  421402 main.go:141] libmachine: (bridge-880673)       <backend model='random'>/dev/random</backend>
	I1014 20:17:52.089895  421402 main.go:141] libmachine: (bridge-880673)     </rng>
	I1014 20:17:52.089912  421402 main.go:141] libmachine: (bridge-880673)   </devices>
	I1014 20:17:52.089925  421402 main.go:141] libmachine: (bridge-880673) </domain>
	I1014 20:17:52.089935  421402 main.go:141] libmachine: (bridge-880673) 
	I1014 20:17:52.095220  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:35:9e:82 in network default
	I1014 20:17:52.095963  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:52.096002  421402 main.go:141] libmachine: (bridge-880673) starting domain...
	I1014 20:17:52.096015  421402 main.go:141] libmachine: (bridge-880673) ensuring networks are active...
	I1014 20:17:52.096955  421402 main.go:141] libmachine: (bridge-880673) Ensuring network default is active
	I1014 20:17:52.097463  421402 main.go:141] libmachine: (bridge-880673) Ensuring network mk-bridge-880673 is active
	I1014 20:17:52.098259  421402 main.go:141] libmachine: (bridge-880673) getting domain XML...
	I1014 20:17:52.099848  421402 main.go:141] libmachine: (bridge-880673) DBG | starting domain XML:
	I1014 20:17:52.099871  421402 main.go:141] libmachine: (bridge-880673) DBG | <domain type='kvm'>
	I1014 20:17:52.099883  421402 main.go:141] libmachine: (bridge-880673) DBG |   <name>bridge-880673</name>
	I1014 20:17:52.099893  421402 main.go:141] libmachine: (bridge-880673) DBG |   <uuid>b2be856d-0946-4eb5-be70-c1a4965dcc84</uuid>
	I1014 20:17:52.099906  421402 main.go:141] libmachine: (bridge-880673) DBG |   <memory unit='KiB'>3145728</memory>
	I1014 20:17:52.099914  421402 main.go:141] libmachine: (bridge-880673) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1014 20:17:52.099927  421402 main.go:141] libmachine: (bridge-880673) DBG |   <vcpu placement='static'>2</vcpu>
	I1014 20:17:52.099938  421402 main.go:141] libmachine: (bridge-880673) DBG |   <os>
	I1014 20:17:52.099948  421402 main.go:141] libmachine: (bridge-880673) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1014 20:17:52.099972  421402 main.go:141] libmachine: (bridge-880673) DBG |     <boot dev='cdrom'/>
	I1014 20:17:52.099983  421402 main.go:141] libmachine: (bridge-880673) DBG |     <boot dev='hd'/>
	I1014 20:17:52.099990  421402 main.go:141] libmachine: (bridge-880673) DBG |     <bootmenu enable='no'/>
	I1014 20:17:52.099999  421402 main.go:141] libmachine: (bridge-880673) DBG |   </os>
	I1014 20:17:52.100006  421402 main.go:141] libmachine: (bridge-880673) DBG |   <features>
	I1014 20:17:52.100044  421402 main.go:141] libmachine: (bridge-880673) DBG |     <acpi/>
	I1014 20:17:52.100071  421402 main.go:141] libmachine: (bridge-880673) DBG |     <apic/>
	I1014 20:17:52.100093  421402 main.go:141] libmachine: (bridge-880673) DBG |     <pae/>
	I1014 20:17:52.100120  421402 main.go:141] libmachine: (bridge-880673) DBG |   </features>
	I1014 20:17:52.100137  421402 main.go:141] libmachine: (bridge-880673) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1014 20:17:52.100152  421402 main.go:141] libmachine: (bridge-880673) DBG |   <clock offset='utc'/>
	I1014 20:17:52.100167  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_poweroff>destroy</on_poweroff>
	I1014 20:17:52.100188  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_reboot>restart</on_reboot>
	I1014 20:17:52.100210  421402 main.go:141] libmachine: (bridge-880673) DBG |   <on_crash>destroy</on_crash>
	I1014 20:17:52.100226  421402 main.go:141] libmachine: (bridge-880673) DBG |   <devices>
	I1014 20:17:52.100241  421402 main.go:141] libmachine: (bridge-880673) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1014 20:17:52.100260  421402 main.go:141] libmachine: (bridge-880673) DBG |     <disk type='file' device='cdrom'>
	I1014 20:17:52.100276  421402 main.go:141] libmachine: (bridge-880673) DBG |       <driver name='qemu' type='raw'/>
	I1014 20:17:52.100292  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/boot2docker.iso'/>
	I1014 20:17:52.100307  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target dev='hdc' bus='scsi'/>
	I1014 20:17:52.100344  421402 main.go:141] libmachine: (bridge-880673) DBG |       <readonly/>
	I1014 20:17:52.100375  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1014 20:17:52.100427  421402 main.go:141] libmachine: (bridge-880673) DBG |     </disk>
	I1014 20:17:52.100446  421402 main.go:141] libmachine: (bridge-880673) DBG |     <disk type='file' device='disk'>
	I1014 20:17:52.100454  421402 main.go:141] libmachine: (bridge-880673) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1014 20:17:52.100467  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source file='/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/bridge-880673.rawdisk'/>
	I1014 20:17:52.100480  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target dev='hda' bus='virtio'/>
	I1014 20:17:52.100496  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1014 20:17:52.100513  421402 main.go:141] libmachine: (bridge-880673) DBG |     </disk>
	I1014 20:17:52.100523  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1014 20:17:52.100534  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1014 20:17:52.100555  421402 main.go:141] libmachine: (bridge-880673) DBG |     </controller>
	I1014 20:17:52.100573  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1014 20:17:52.100587  421402 main.go:141] libmachine: (bridge-880673) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1014 20:17:52.100599  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1014 20:17:52.100610  421402 main.go:141] libmachine: (bridge-880673) DBG |     </controller>
	I1014 20:17:52.100618  421402 main.go:141] libmachine: (bridge-880673) DBG |     <interface type='network'>
	I1014 20:17:52.100629  421402 main.go:141] libmachine: (bridge-880673) DBG |       <mac address='52:54:00:21:00:20'/>
	I1014 20:17:52.100639  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source network='mk-bridge-880673'/>
	I1014 20:17:52.100681  421402 main.go:141] libmachine: (bridge-880673) DBG |       <model type='virtio'/>
	I1014 20:17:52.100709  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1014 20:17:52.100718  421402 main.go:141] libmachine: (bridge-880673) DBG |     </interface>
	I1014 20:17:52.100728  421402 main.go:141] libmachine: (bridge-880673) DBG |     <interface type='network'>
	I1014 20:17:52.100737  421402 main.go:141] libmachine: (bridge-880673) DBG |       <mac address='52:54:00:35:9e:82'/>
	I1014 20:17:52.100747  421402 main.go:141] libmachine: (bridge-880673) DBG |       <source network='default'/>
	I1014 20:17:52.100755  421402 main.go:141] libmachine: (bridge-880673) DBG |       <model type='virtio'/>
	I1014 20:17:52.100768  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1014 20:17:52.100790  421402 main.go:141] libmachine: (bridge-880673) DBG |     </interface>
	I1014 20:17:52.100807  421402 main.go:141] libmachine: (bridge-880673) DBG |     <serial type='pty'>
	I1014 20:17:52.100818  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target type='isa-serial' port='0'>
	I1014 20:17:52.100828  421402 main.go:141] libmachine: (bridge-880673) DBG |         <model name='isa-serial'/>
	I1014 20:17:52.100836  421402 main.go:141] libmachine: (bridge-880673) DBG |       </target>
	I1014 20:17:52.100846  421402 main.go:141] libmachine: (bridge-880673) DBG |     </serial>
	I1014 20:17:52.100854  421402 main.go:141] libmachine: (bridge-880673) DBG |     <console type='pty'>
	I1014 20:17:52.100863  421402 main.go:141] libmachine: (bridge-880673) DBG |       <target type='serial' port='0'/>
	I1014 20:17:52.100871  421402 main.go:141] libmachine: (bridge-880673) DBG |     </console>
	I1014 20:17:52.100885  421402 main.go:141] libmachine: (bridge-880673) DBG |     <input type='mouse' bus='ps2'/>
	I1014 20:17:52.100898  421402 main.go:141] libmachine: (bridge-880673) DBG |     <input type='keyboard' bus='ps2'/>
	I1014 20:17:52.100906  421402 main.go:141] libmachine: (bridge-880673) DBG |     <audio id='1' type='none'/>
	I1014 20:17:52.100919  421402 main.go:141] libmachine: (bridge-880673) DBG |     <memballoon model='virtio'>
	I1014 20:17:52.100931  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1014 20:17:52.100946  421402 main.go:141] libmachine: (bridge-880673) DBG |     </memballoon>
	I1014 20:17:52.100960  421402 main.go:141] libmachine: (bridge-880673) DBG |     <rng model='virtio'>
	I1014 20:17:52.100991  421402 main.go:141] libmachine: (bridge-880673) DBG |       <backend model='random'>/dev/random</backend>
	I1014 20:17:52.101015  421402 main.go:141] libmachine: (bridge-880673) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1014 20:17:52.101027  421402 main.go:141] libmachine: (bridge-880673) DBG |     </rng>
	I1014 20:17:52.101036  421402 main.go:141] libmachine: (bridge-880673) DBG |   </devices>
	I1014 20:17:52.101045  421402 main.go:141] libmachine: (bridge-880673) DBG | </domain>
	I1014 20:17:52.101054  421402 main.go:141] libmachine: (bridge-880673) DBG | 
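[editor's note] The DBG lines above dump the full libvirt domain XML before the kvm2 driver starts the VM. For context, a minimal sketch of defining and starting a domain from such XML, assuming the libvirt.org/go/libvirt bindings; the file name and error handling are illustrative, not the driver's actual code:

// Hypothetical sketch: define and start a libvirt domain from an XML file
// like the one dumped in the DBG lines above.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("bridge-880673.xml") // the XML shown above
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the profile
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the defined domain
		log.Fatal(err)
	}
}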
	I1014 20:17:50.101459  421087 main.go:141] libmachine: (flannel-880673) reserved static IP address 192.168.39.78 for domain flannel-880673
	I1014 20:17:50.101509  421087 main.go:141] libmachine: (flannel-880673) DBG | Getting to WaitForSSH function...
	I1014 20:17:50.101518  421087 main.go:141] libmachine: (flannel-880673) waiting for SSH...
	I1014 20:17:50.105228  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.105867  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.105896  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.106044  421087 main.go:141] libmachine: (flannel-880673) DBG | Using SSH client type: external
	I1014 20:17:50.106075  421087 main.go:141] libmachine: (flannel-880673) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa (-rw-------)
	I1014 20:17:50.106104  421087 main.go:141] libmachine: (flannel-880673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.78 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:17:50.106118  421087 main.go:141] libmachine: (flannel-880673) DBG | About to run SSH command:
	I1014 20:17:50.106130  421087 main.go:141] libmachine: (flannel-880673) DBG | exit 0
	I1014 20:17:50.238769  421087 main.go:141] libmachine: (flannel-880673) DBG | SSH cmd err, output: <nil>: 
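[editor's note] The WaitForSSH step above shells out to the system ssh client with the logged options and runs `exit 0` until the guest answers. A minimal sketch of that probe, assuming os/exec and an illustrative retry budget (the real driver's loop differs):

// Hypothetical sketch: probe SSH readiness by running `exit 0` through the
// system ssh client with the same options the log shows.
package main

import (
	"log"
	"os/exec"
	"time"
)

func waitForSSH(addr, key string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@" + addr,
		"exit 0",
	}
	var err error
	for i := 0; i < 10; i++ { // retry budget is an assumption
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // guest sshd answered and ran the command
		}
		time.Sleep(3 * time.Second)
	}
	return err
}

func main() {
	if err := waitForSSH("192.168.39.78", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}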
	I1014 20:17:50.239127  421087 main.go:141] libmachine: (flannel-880673) domain creation complete
	I1014 20:17:50.239637  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:50.240432  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:50.240681  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:50.240878  421087 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 20:17:50.240893  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:17:50.242891  421087 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 20:17:50.242908  421087 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 20:17:50.242918  421087 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 20:17:50.242927  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.246273  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.246749  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.246772  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.246939  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.247138  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.247284  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.247443  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.247618  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.247940  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.247959  421087 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 20:17:50.357163  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:17:50.357194  421087 main.go:141] libmachine: Detecting the provisioner...
	I1014 20:17:50.357204  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.360955  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.361450  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.361521  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.361680  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.361903  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.362061  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.362240  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.362533  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.362848  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.362864  421087 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 20:17:50.470979  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1014 20:17:50.471037  421087 main.go:141] libmachine: found compatible host: buildroot
	I1014 20:17:50.471044  421087 main.go:141] libmachine: Provisioning with buildroot...
	I1014 20:17:50.471052  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.471338  421087 buildroot.go:166] provisioning hostname "flannel-880673"
	I1014 20:17:50.471379  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.471617  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.474844  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.475290  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.475334  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.475473  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.475684  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.475858  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.476027  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.476233  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.476512  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.476530  421087 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-880673 && echo "flannel-880673" | sudo tee /etc/hostname
	I1014 20:17:50.597679  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-880673
	
	I1014 20:17:50.597732  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.601501  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.601966  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.601994  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.602373  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.602616  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.602849  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.603026  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.603233  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:50.603517  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:50.603552  421087 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-880673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-880673/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-880673' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 20:17:50.720937  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:17:50.720967  421087 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:17:50.721002  421087 buildroot.go:174] setting up certificates
	I1014 20:17:50.721015  421087 provision.go:84] configureAuth start
	I1014 20:17:50.721029  421087 main.go:141] libmachine: (flannel-880673) Calling .GetMachineName
	I1014 20:17:50.721462  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:50.724906  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.725295  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.725354  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.725547  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.728177  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.728630  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.728660  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.728837  421087 provision.go:143] copyHostCerts
	I1014 20:17:50.728911  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:17:50.728932  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:17:50.729026  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:17:50.729171  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:17:50.729183  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:17:50.729225  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:17:50.729343  421087 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:17:50.729357  421087 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:17:50.729409  421087 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:17:50.729511  421087 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.flannel-880673 san=[127.0.0.1 192.168.39.78 flannel-880673 localhost minikube]
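[editor's note] The server cert above is issued against the CA with a fixed SAN list (127.0.0.1, the VM IP, the hostname, localhost, minikube). A hedged sketch of that issuance, assuming Go's crypto/x509 and a PKCS#1 RSA CA key; this is not minikube's provision code:

// Hypothetical sketch: issue a server certificate whose SANs match the
// san=[...] list logged above, signed by an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM block in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumes an RSA PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.flannel-880673"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log above
		DNSNames:    []string{"flannel-880673", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.78")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}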
	I1014 20:17:50.937434  421087 provision.go:177] copyRemoteCerts
	I1014 20:17:50.937529  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:17:50.937567  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:50.940661  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.941077  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:50.941106  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:50.941293  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:50.941546  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:50.941735  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:50.941947  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.027245  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:17:51.057431  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1014 20:17:51.087620  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 20:17:51.117032  421087 provision.go:87] duration metric: took 395.999388ms to configureAuth
	I1014 20:17:51.117078  421087 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:17:51.117230  421087 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:17:51.117349  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.120410  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.120743  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.120768  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.120992  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.121252  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.121462  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.121640  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.121892  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:51.122177  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:51.122203  421087 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:17:51.359141  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 20:17:51.359176  421087 main.go:141] libmachine: Checking connection to Docker...
	I1014 20:17:51.359188  421087 main.go:141] libmachine: (flannel-880673) Calling .GetURL
	I1014 20:17:51.360877  421087 main.go:141] libmachine: (flannel-880673) DBG | using libvirt version 8000000
	I1014 20:17:51.363941  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.364391  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.364421  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.364677  421087 main.go:141] libmachine: Docker is up and running!
	I1014 20:17:51.364693  421087 main.go:141] libmachine: Reticulating splines...
	I1014 20:17:51.364702  421087 client.go:171] duration metric: took 21.333870837s to LocalClient.Create
	I1014 20:17:51.364755  421087 start.go:167] duration metric: took 21.333952273s to libmachine.API.Create "flannel-880673"
	I1014 20:17:51.364772  421087 start.go:293] postStartSetup for "flannel-880673" (driver="kvm2")
	I1014 20:17:51.364785  421087 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:17:51.364811  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.365093  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:17:51.365122  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.368038  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.368451  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.368482  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.368691  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.368870  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.369049  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.369172  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.454055  421087 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:17:51.459382  421087 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:17:51.459411  421087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:17:51.459480  421087 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:17:51.459555  421087 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:17:51.459644  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:17:51.471818  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
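[editor's note] The filesync scan above mirrors anything under .minikube/files/<path> to /<path> on the guest (here etc/ssl/certs/3686342.pem). A sketch of that path mapping, assuming a plain local copy in place of minikube's scp-over-SSH:

// Hypothetical sketch of the "local assets" mapping logged above:
// <base>/files/<path> is mirrored onto <dstRoot>/<path>.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func syncFiles(base, dstRoot string) error {
	root := filepath.Join(base, "files")
	return filepath.WalkDir(root, func(p string, d os.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p) // e.g. etc/ssl/certs/3686342.pem
		if err != nil {
			return err
		}
		dst := filepath.Join(dstRoot, rel)
		if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
			return err
		}
		in, err := os.Open(p)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	})
}

func main() {
	// destination root is illustrative; minikube writes to the guest's /
	if err := syncFiles(os.ExpandEnv("$HOME/.minikube"), "/tmp/guestroot"); err != nil {
		log.Fatal(err)
	}
}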
	I1014 20:17:51.500829  421087 start.go:296] duration metric: took 136.037282ms for postStartSetup
	I1014 20:17:51.500899  421087 main.go:141] libmachine: (flannel-880673) Calling .GetConfigRaw
	I1014 20:17:51.501695  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:51.504654  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.505104  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.505134  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.505480  421087 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/config.json ...
	I1014 20:17:51.505690  421087 start.go:128] duration metric: took 21.496576305s to createHost
	I1014 20:17:51.505714  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.508879  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.509305  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.509350  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.509541  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.509750  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.510035  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.510221  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.510420  421087 main.go:141] libmachine: Using SSH client type: native
	I1014 20:17:51.510686  421087 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I1014 20:17:51.510702  421087 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:17:51.614004  421087 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760473071.583554781
	
	I1014 20:17:51.614069  421087 fix.go:216] guest clock: 1760473071.583554781
	I1014 20:17:51.614085  421087 fix.go:229] Guest: 2025-10-14 20:17:51.583554781 +0000 UTC Remote: 2025-10-14 20:17:51.50570252 +0000 UTC m=+21.644925851 (delta=77.852261ms)
	I1014 20:17:51.614130  421087 fix.go:200] guest clock delta is within tolerance: 77.852261ms
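[editor's note] The guest-clock check above runs `date +%s.%N` in the guest, parses the result, and compares it with the host clock (here a 77ms delta passes). A sketch of that comparison, assuming nine-digit nanoseconds from %N and an illustrative 2s tolerance (the real threshold may differ):

// Hypothetical sketch: parse `date +%s.%N` output and compute the
// guest/host clock delta, as in the fix.go lines above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOut string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nsec := int64(0)
	if len(parts) == 2 { // %N prints nine digits, leading zeros included
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1760473071.583554781") // value from the log above
	if err != nil {
		panic(err)
	}
	if math.Abs(d.Seconds()) < 2 { // tolerance threshold is an assumption
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}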
	I1014 20:17:51.614141  421087 start.go:83] releasing machines lock for "flannel-880673", held for 21.605088741s
	I1014 20:17:51.614185  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.614505  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:51.617635  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.618186  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.618235  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.618444  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619001  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619200  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:17:51.619355  421087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:17:51.619418  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.619424  421087 ssh_runner.go:195] Run: cat /version.json
	I1014 20:17:51.619439  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:17:51.622937  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.622971  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623493  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.623550  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:51.623586  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623603  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:51.623798  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.623910  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:17:51.624021  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.624023  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:17:51.624244  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.624328  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:17:51.624403  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.624522  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:17:51.706447  421087 ssh_runner.go:195] Run: systemctl --version
	I1014 20:17:51.742651  421087 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:17:51.906255  421087 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:17:51.913515  421087 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:17:51.913622  421087 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:17:51.933476  421087 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 20:17:51.933501  421087 start.go:495] detecting cgroup driver to use...
	I1014 20:17:51.933557  421087 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:17:51.953531  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:17:51.971263  421087 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:17:51.971349  421087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:17:51.989240  421087 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:17:52.006381  421087 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:17:52.164206  421087 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:17:52.395429  421087 docker.go:234] disabling docker service ...
	I1014 20:17:52.395502  421087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:17:52.418523  421087 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:17:52.434793  421087 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:17:52.591989  421087 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:17:52.743031  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:17:52.761625  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:17:52.788904  421087 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:17:52.788959  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.803433  421087 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:17:52.803500  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.819575  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.834511  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.848951  421087 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:17:52.862556  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.874877  421087 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:17:52.895550  421087 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
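[editor's note] The sed commands above rewrite individual keys of /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls). The two simplest rewrites expressed in Go, as a sketch with multiline regexps rather than sed:

// Hypothetical sketch: replace the pause_image and cgroup_manager lines of
// 02-crio.conf, mirroring the sed -i invocations logged above.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}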
	I1014 20:17:52.909411  421087 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:17:52.920167  421087 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:17:52.920235  421087 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:17:52.940776  421087 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
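[editor's note] The sequence above is a deliberate fallback: the sysctl probe fails while br_netfilter is unloaded (hence the "might be okay" warning), so the module is loaded and IPv4 forwarding enabled. A sketch of the same fallback:

// Hypothetical sketch: probe the bridge netfilter sysctl, load the module
// if the key is missing, then enable IPv4 forwarding, as logged above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// the key only exists once br_netfilter is loaded
		if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
			log.Fatal(err.Run())
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
}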
	I1014 20:17:52.956779  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:53.103029  421087 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:17:53.226612  421087 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:17:53.226724  421087 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:17:53.234132  421087 start.go:563] Will wait 60s for crictl version
	I1014 20:17:53.234203  421087 ssh_runner.go:195] Run: which crictl
	I1014 20:17:53.239069  421087 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:17:53.287126  421087 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:17:53.287244  421087 ssh_runner.go:195] Run: crio --version
	I1014 20:17:53.327479  421087 ssh_runner.go:195] Run: crio --version
	I1014 20:17:53.363073  421087 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1014 20:17:53.364248  421087 main.go:141] libmachine: (flannel-880673) Calling .GetIP
	I1014 20:17:53.368097  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:53.368664  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:17:53.368691  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:17:53.369001  421087 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 20:17:53.373983  421087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
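[editor's note] The /etc/hosts update above is a filter-then-append: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, and the temp file is copied back. The same step also runs later for control-plane.minikube.internal. A sketch of the idea in Go:

// Hypothetical sketch: ensure exactly one host.minikube.internal mapping in
// /etc/hosts, like the bash one-liner logged above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// drop any stale mapping, exactly like the grep -v above
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}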
	I1014 20:17:53.389567  421087 kubeadm.go:883] updating cluster {Name:flannel-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:17:53.389708  421087 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:17:53.389768  421087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:17:53.430110  421087 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 20:17:53.430215  421087 ssh_runner.go:195] Run: which lz4
	I1014 20:17:53.436993  421087 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 20:17:53.442434  421087 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 20:17:53.442474  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	W1014 20:17:52.137421  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:54.137620  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:17:56.138484  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:53.540027  421402 main.go:141] libmachine: (bridge-880673) waiting for domain to start...
	I1014 20:17:53.541811  421402 main.go:141] libmachine: (bridge-880673) domain is now running
	I1014 20:17:53.541838  421402 main.go:141] libmachine: (bridge-880673) waiting for IP...
	I1014 20:17:53.542767  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:53.543379  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:53.543407  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:53.543822  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:53.543882  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:53.543835  422003 retry.go:31] will retry after 294.647054ms: waiting for domain to come up
	I1014 20:17:53.840886  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:53.841778  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:53.841809  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:53.842292  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:53.842378  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:53.842274  422003 retry.go:31] will retry after 306.249634ms: waiting for domain to come up
	I1014 20:17:54.151233  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:54.152165  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:54.152200  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:54.152799  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:54.152831  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:54.152769  422003 retry.go:31] will retry after 428.212526ms: waiting for domain to come up
	I1014 20:17:54.582621  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:54.583447  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:54.583472  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:54.584500  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:54.584527  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:54.584013  422003 retry.go:31] will retry after 599.389005ms: waiting for domain to come up
	I1014 20:17:55.184701  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:55.185409  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:55.185439  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:55.185832  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:55.185881  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:55.185838  422003 retry.go:31] will retry after 651.000197ms: waiting for domain to come up
	I1014 20:17:55.838912  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:55.839716  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:55.839748  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:55.840211  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:55.840245  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:55.840177  422003 retry.go:31] will retry after 630.744356ms: waiting for domain to come up
	I1014 20:17:56.473326  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:56.474156  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:56.474185  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:56.474592  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:56.474662  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:56.474596  422003 retry.go:31] will retry after 941.351033ms: waiting for domain to come up
	I1014 20:17:57.417345  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:57.417934  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:57.417959  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:57.418386  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:57.418446  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:57.418390  422003 retry.go:31] will retry after 1.156861705s: waiting for domain to come up
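[editor's note] The retry.go lines above poll for the bridge domain's IP with a wait that grows on each miss (295ms, 306ms, 428ms, ...). A sketch of that pattern, with an illustrative growth factor and jitter; minikube's exact backoff parameters may differ:

// Hypothetical sketch: poll for a domain IP with a growing, jittered wait
// until a deadline, like the "will retry after ..." loop above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, bool), deadline time.Duration) (string, error) {
	wait := 300 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		if ip, ok := lookup(); ok {
			return ip, nil
		}
		jittered := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", jittered)
		time.Sleep(jittered)
		wait = wait * 3 / 2 // grow the base interval each round
	}
	return "", errors.New("timed out waiting for domain IP")
}

func main() {
	ip, err := waitForIP(func() (string, bool) { return "", false }, 2*time.Second)
	fmt.Println(ip, err)
}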
	I1014 20:17:55.030710  421087 crio.go:462] duration metric: took 1.593761668s to copy over tarball
	I1014 20:17:55.030789  421087 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 20:17:56.814579  421087 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.783758686s)
	I1014 20:17:56.814609  421087 crio.go:469] duration metric: took 1.783864241s to extract the tarball
	I1014 20:17:56.814618  421087 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 20:17:56.868000  421087 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:17:56.915902  421087 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:17:56.915931  421087 cache_images.go:85] Images are preloaded, skipping loading
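[editor's note] The preload flow completed above: the 20:17:53 lines showed the stat check fail and the 409MB tarball get scp'd in, and here it is unpacked into /var and removed, after which crictl confirms the images. A sketch of the extraction step, assuming the same tar flags the log shows:

// Hypothetical sketch: unpack the preloaded image tarball into /var with
// security xattrs preserved, then remove it, as logged above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("tarball not present, would scp it in first: %v", err)
	}
	// same flags the log shows: keep security xattrs, decompress with lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
	_ = os.Remove(tarball) // the tarball is deleted after extraction
}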
	I1014 20:17:56.915940  421087 kubeadm.go:934] updating node { 192.168.39.78 8443 v1.34.1 crio true true} ...
	I1014 20:17:56.916066  421087 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-880673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1014 20:17:56.916158  421087 ssh_runner.go:195] Run: crio config
	I1014 20:17:56.962683  421087 cni.go:84] Creating CNI manager for "flannel"
	I1014 20:17:56.962717  421087 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:17:56.962737  421087 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-880673 NodeName:flannel-880673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:17:56.962922  421087 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-880673"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.78"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
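The generated kubeadm.yaml above stacks four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm releases can sanity-check such a file offline before init; a sketch using the version-pinned binary seen in these logs:

	# sketch: validate the generated multi-document config (requires a kubeadm with `config validate`)
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml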
	
	I1014 20:17:56.963011  421087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:17:56.977326  421087 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:17:56.977413  421087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:17:56.990111  421087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 20:17:57.014665  421087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:17:57.036328  421087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1014 20:17:57.060836  421087 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I1014 20:17:57.065142  421087 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.78	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
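The one-liner above pins control-plane.minikube.internal in /etc/hosts idempotently: it filters out any stale entry, appends the current IP, and writes through a temp file with sudo cp because the shell redirection itself runs unprivileged. A generalized sketch (update_hosts is illustrative, not a minikube helper):

	# sketch: idempotently map a name to an IP in /etc/hosts
	update_hosts() {
	  local ip="$1" name="$2"
	  # drop any existing line ending in "<tab><name>", then append the fresh mapping
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
	}
	update_hosts 192.168.39.78 control-plane.minikube.internal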
	I1014 20:17:57.080533  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:17:57.237432  421087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:17:57.269708  421087 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673 for IP: 192.168.39.78
	I1014 20:17:57.269738  421087 certs.go:195] generating shared ca certs ...
	I1014 20:17:57.269760  421087 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.269989  421087 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:17:57.270059  421087 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:17:57.270074  421087 certs.go:257] generating profile certs ...
	I1014 20:17:57.270172  421087 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key
	I1014 20:17:57.270204  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt with IP's: []
	I1014 20:17:57.590880  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt ...
	I1014 20:17:57.590941  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: {Name:mkf367293cc65dfacac82f8386e6aa77348cb48e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.591193  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key ...
	I1014 20:17:57.591214  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.key: {Name:mk32041be4750a3b1dd0573fa6125b7f9b29b38d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.591362  421087 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17
	I1014 20:17:57.591389  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.78]
	I1014 20:17:57.958440  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 ...
	I1014 20:17:57.958473  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17: {Name:mk7a0d0e7468fc1ecb2d15a21f1efedfb729160a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.958647  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17 ...
	I1014 20:17:57.958662  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17: {Name:mkcd9e714b6414537d716937c3c1e66a152dc681 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:57.958737  421087 certs.go:382] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt.50d6be17 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt
	I1014 20:17:57.958836  421087 certs.go:386] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key.50d6be17 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key
	I1014 20:17:57.958914  421087 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key
	I1014 20:17:57.958934  421087 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt with IP's: []
	I1014 20:17:58.348483  421087 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt ...
	I1014 20:17:58.348517  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt: {Name:mk768d87c2d8e36cd6890fe09ebcb78d216d69e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:58.348732  421087 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key ...
	I1014 20:17:58.348762  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key: {Name:mk2722cff97c505742c3f319a68d318bbcbed2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:17:58.348993  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:17:58.349046  421087 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:17:58.349061  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:17:58.349092  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:17:58.349124  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:17:58.349156  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:17:58.349211  421087 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:17:58.349821  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:17:58.388851  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:17:58.428916  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:17:58.460766  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:17:58.492261  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:17:58.528396  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:17:58.561053  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:17:58.593742  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 20:17:58.625414  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:17:58.659235  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:17:58.691366  421087 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:17:58.725392  421087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:17:58.747071  421087 ssh_runner.go:195] Run: openssl version
	I1014 20:17:58.754399  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:17:58.768104  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.773406  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.773478  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:17:58.781682  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:17:58.798231  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:17:58.812733  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.819794  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.819891  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:17:58.830331  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
	I1014 20:17:58.846865  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:17:58.864053  421087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.871566  421087 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.871657  421087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:17:58.885584  421087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
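The three test/hash/symlink sequences above install each certificate into OpenSSL's hash-lookup directory: openssl x509 -hash -noout prints the subject-name hash that OpenSSL uses to locate CAs under /etc/ssl/certs, and the certificate is symlinked as <hash>.0 (b5213941.0 for minikubeCA.pem above). The pattern in isolation:

	# sketch: register a CA in the OpenSSL hash-lookup directory
	pem=/usr/share/ca-certificates/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # .0 = first cert with this subject hash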
	I1014 20:17:58.904825  421087 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:17:58.912981  421087 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:17:58.913041  421087 kubeadm.go:400] StartCluster: {Name:flannel-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:17:58.913135  421087 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:17:58.913215  421087 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:17:58.965501  421087 cri.go:89] found id: ""
	I1014 20:17:58.965586  421087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:17:58.978845  421087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:17:58.991748  421087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:17:59.004199  421087 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:17:59.004222  421087 kubeadm.go:157] found existing configuration files:
	
	I1014 20:17:59.004271  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:17:59.016492  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:17:59.016555  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:17:59.029171  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:17:59.041661  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:17:59.041733  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:17:59.056418  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:17:59.068235  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:17:59.068381  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:17:59.081198  421087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:17:59.093134  421087 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:17:59.093205  421087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:17:59.105941  421087 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
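Note the env PATH prefix in the init command above: minikube runs the version-pinned kubeadm staged under /var/lib/minikube/binaries/v1.34.1 inside a root shell, rather than whatever kubeadm the guest image might ship. The same trick can confirm which binary is picked up:

	# sketch: confirm the pinned kubeadm is the one resolved on PATH
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" /bin/bash -c 'command -v kubeadm && kubeadm version -o short'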
	I1014 20:17:59.264636  421087 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1014 20:17:58.657774  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:01.139015  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:17:58.577022  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:58.577800  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:58.577825  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:58.578168  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:58.578194  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:58.578154  422003 retry.go:31] will retry after 1.402636054s: waiting for domain to come up
	I1014 20:17:59.982567  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:17:59.983205  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:17:59.983237  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:17:59.983594  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:17:59.983640  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:17:59.983590  422003 retry.go:31] will retry after 2.221969011s: waiting for domain to come up
	I1014 20:18:02.208011  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:02.209248  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:02.209362  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:02.209796  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:02.209827  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:02.209782  422003 retry.go:31] will retry after 2.101932185s: waiting for domain to come up
	W1014 20:18:03.636759  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:05.637870  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:04.313776  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:04.314632  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:04.314664  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:04.315124  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:04.315159  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:04.315054  422003 retry.go:31] will retry after 2.342959019s: waiting for domain to come up
	I1014 20:18:06.660001  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:06.660763  421402 main.go:141] libmachine: (bridge-880673) DBG | no network interface addresses found for domain bridge-880673 (source=lease)
	I1014 20:18:06.660792  421402 main.go:141] libmachine: (bridge-880673) DBG | trying to list again with source=arp
	I1014 20:18:06.661224  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find current IP address of domain bridge-880673 in network mk-bridge-880673 (interfaces detected: [])
	I1014 20:18:06.661254  421402 main.go:141] libmachine: (bridge-880673) DBG | I1014 20:18:06.661182  422003 retry.go:31] will retry after 3.64841419s: waiting for domain to come up
	I1014 20:18:11.536374  421087 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:18:11.536506  421087 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:18:11.536652  421087 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:18:11.536782  421087 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:18:11.536904  421087 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:18:11.536982  421087 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:18:11.538339  421087 out.go:252]   - Generating certificates and keys ...
	I1014 20:18:11.538437  421087 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:18:11.538511  421087 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:18:11.538632  421087 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:18:11.538736  421087 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:18:11.538828  421087 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:18:11.538899  421087 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:18:11.538991  421087 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:18:11.539177  421087 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [flannel-880673 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1014 20:18:11.539273  421087 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:18:11.539461  421087 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [flannel-880673 localhost] and IPs [192.168.39.78 127.0.0.1 ::1]
	I1014 20:18:11.539551  421087 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:18:11.539647  421087 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:18:11.539718  421087 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:18:11.539793  421087 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:18:11.539860  421087 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:18:11.539948  421087 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:18:11.540034  421087 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:18:11.540120  421087 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:18:11.540199  421087 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:18:11.540345  421087 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:18:11.540460  421087 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:18:11.542094  421087 out.go:252]   - Booting up control plane ...
	I1014 20:18:11.542205  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:18:11.542352  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:18:11.542478  421087 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:18:11.542648  421087 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:18:11.542786  421087 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:18:11.542936  421087 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:18:11.543072  421087 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:18:11.543132  421087 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:18:11.543328  421087 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:18:11.543489  421087 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:18:11.543572  421087 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001234355s
	I1014 20:18:11.543691  421087 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:18:11.543814  421087 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.78:8443/livez
	I1014 20:18:11.543944  421087 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:18:11.544060  421087 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1014 20:18:11.544183  421087 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.076803108s
	I1014 20:18:11.544288  421087 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.033725081s
	I1014 20:18:11.544397  421087 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001911853s
	I1014 20:18:11.544548  421087 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:18:11.544745  421087 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:18:11.544849  421087 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:18:11.545113  421087 kubeadm.go:318] [mark-control-plane] Marking the node flannel-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:18:11.545193  421087 kubeadm.go:318] [bootstrap-token] Using token: mb1gep.qj5bz6jgot4fwn77
	I1014 20:18:11.547493  421087 out.go:252]   - Configuring RBAC rules ...
	I1014 20:18:11.547615  421087 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:18:11.547742  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:18:11.547955  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:18:11.548135  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:18:11.548274  421087 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:18:11.548423  421087 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:18:11.548592  421087 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:18:11.548666  421087 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:18:11.548750  421087 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:18:11.548772  421087 kubeadm.go:318] 
	I1014 20:18:11.548854  421087 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:18:11.548868  421087 kubeadm.go:318] 
	I1014 20:18:11.548957  421087 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:18:11.548969  421087 kubeadm.go:318] 
	I1014 20:18:11.549017  421087 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:18:11.549103  421087 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:18:11.549161  421087 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:18:11.549173  421087 kubeadm.go:318] 
	I1014 20:18:11.549239  421087 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:18:11.549250  421087 kubeadm.go:318] 
	I1014 20:18:11.549352  421087 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:18:11.549378  421087 kubeadm.go:318] 
	I1014 20:18:11.549446  421087 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:18:11.549569  421087 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:18:11.549669  421087 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:18:11.549678  421087 kubeadm.go:318] 
	I1014 20:18:11.549781  421087 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:18:11.549879  421087 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:18:11.549892  421087 kubeadm.go:318] 
	I1014 20:18:11.549998  421087 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mb1gep.qj5bz6jgot4fwn77 \
	I1014 20:18:11.550130  421087 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:18:11.550176  421087 kubeadm.go:318] 	--control-plane 
	I1014 20:18:11.550185  421087 kubeadm.go:318] 
	I1014 20:18:11.550261  421087 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:18:11.550268  421087 kubeadm.go:318] 
	I1014 20:18:11.550378  421087 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mb1gep.qj5bz6jgot4fwn77 \
	I1014 20:18:11.550496  421087 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
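The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's DER-encoded public key; a joining node can recompute it from the CA certificate and compare. A sketch against the CA location minikube uses in these logs:

	# sketch: recompute the discovery-token CA cert hash from ca.crt
	openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 \
	  | awk '{print "sha256:" $NF}'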
	I1014 20:18:11.550521  421087 cni.go:84] Creating CNI manager for "flannel"
	I1014 20:18:11.552888  421087 out.go:179] * Configuring Flannel (Container Networking Interface) ...
	W1014 20:18:08.137637  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:10.138153  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:10.313428  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.314180  421402 main.go:141] libmachine: (bridge-880673) found domain IP: 192.168.61.105
	I1014 20:18:10.314203  421402 main.go:141] libmachine: (bridge-880673) reserving static IP address...
	I1014 20:18:10.314217  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has current primary IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.314677  421402 main.go:141] libmachine: (bridge-880673) DBG | unable to find host DHCP lease matching {name: "bridge-880673", mac: "52:54:00:21:00:20", ip: "192.168.61.105"} in network mk-bridge-880673
	I1014 20:18:10.548894  421402 main.go:141] libmachine: (bridge-880673) DBG | Getting to WaitForSSH function...
	I1014 20:18:10.548930  421402 main.go:141] libmachine: (bridge-880673) reserved static IP address 192.168.61.105 for domain bridge-880673
	I1014 20:18:10.548965  421402 main.go:141] libmachine: (bridge-880673) waiting for SSH...
	I1014 20:18:10.552436  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.552981  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.553012  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.553268  421402 main.go:141] libmachine: (bridge-880673) DBG | Using SSH client type: external
	I1014 20:18:10.553294  421402 main.go:141] libmachine: (bridge-880673) DBG | Using SSH private key: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa (-rw-------)
	I1014 20:18:10.553354  421402 main.go:141] libmachine: (bridge-880673) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 20:18:10.553373  421402 main.go:141] libmachine: (bridge-880673) DBG | About to run SSH command:
	I1014 20:18:10.553390  421402 main.go:141] libmachine: (bridge-880673) DBG | exit 0
	I1014 20:18:10.685704  421402 main.go:141] libmachine: (bridge-880673) DBG | SSH cmd err, output: <nil>: 
	I1014 20:18:10.686022  421402 main.go:141] libmachine: (bridge-880673) domain creation complete
	I1014 20:18:10.686447  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:18:10.687088  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:10.687355  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:10.687542  421402 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 20:18:10.687560  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:10.689236  421402 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 20:18:10.689253  421402 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 20:18:10.689261  421402 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 20:18:10.689269  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.692164  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.692638  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.692667  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.692889  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.693092  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.693260  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.693451  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.693655  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.693975  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.693994  421402 main.go:141] libmachine: About to run SSH command:
	exit 0
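WaitForSSH probes the new VM by running exit 0 over SSH until the command succeeds; a zero exit status with empty output is the readiness signal. An equivalent loop, using the key path and address from these logs (host-key checks are relaxed because the VM is throwaway):

	# sketch: poll until sshd in the guest accepts logins
	until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa \
	    docker@192.168.61.105 'exit 0'; do
	  sleep 2
	done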
	I1014 20:18:10.800086  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:18:10.800112  421402 main.go:141] libmachine: Detecting the provisioner...
	I1014 20:18:10.800125  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.805064  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.805804  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.805841  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.806339  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.806645  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.806900  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.807108  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.807424  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.807729  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.807746  421402 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 20:18:10.924938  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1014 20:18:10.925034  421402 main.go:141] libmachine: found compatible host: buildroot
	I1014 20:18:10.925048  421402 main.go:141] libmachine: Provisioning with buildroot...
	I1014 20:18:10.925060  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:10.925444  421402 buildroot.go:166] provisioning hostname "bridge-880673"
	I1014 20:18:10.925485  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:10.925766  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:10.929615  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.930124  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:10.930176  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:10.930503  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:10.930771  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.930988  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:10.931168  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:10.931376  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:10.931687  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:10.931711  421402 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-880673 && echo "bridge-880673" | sudo tee /etc/hostname
	I1014 20:18:11.067523  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-880673
	
	I1014 20:18:11.067572  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.072145  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.072622  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.072658  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.072970  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.073270  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.073503  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.073722  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.073955  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.074245  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.074276  421402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-880673' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-880673/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-880673' | sudo tee -a /etc/hosts; 
				fi
			fi
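The snippet above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 so the name resolves even without DNS, rewriting an existing 127.0.1.1 line or appending one only when no entry exists. Inside the guest the result can be checked with:

	# sketch: verify the hostname provisioning took effect
	hostname                          # should print bridge-880673
	grep '^127\.0\.1\.1' /etc/hosts   # Debian-style self-hostname mapping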
	I1014 20:18:11.198456  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 20:18:11.198494  421402 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-364627/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-364627/.minikube}
	I1014 20:18:11.198543  421402 buildroot.go:174] setting up certificates
	I1014 20:18:11.198557  421402 provision.go:84] configureAuth start
	I1014 20:18:11.198577  421402 main.go:141] libmachine: (bridge-880673) Calling .GetMachineName
	I1014 20:18:11.198927  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:11.202802  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.203150  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.203189  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.203382  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.207637  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.208132  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.208159  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.208374  421402 provision.go:143] copyHostCerts
	I1014 20:18:11.208450  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem, removing ...
	I1014 20:18:11.208480  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem
	I1014 20:18:11.208587  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/ca.pem (1082 bytes)
	I1014 20:18:11.208749  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem, removing ...
	I1014 20:18:11.208768  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem
	I1014 20:18:11.208818  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/cert.pem (1123 bytes)
	I1014 20:18:11.208923  421402 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem, removing ...
	I1014 20:18:11.208942  421402 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem
	I1014 20:18:11.208982  421402 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-364627/.minikube/key.pem (1675 bytes)
	I1014 20:18:11.209070  421402 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem org=jenkins.bridge-880673 san=[127.0.0.1 192.168.61.105 bridge-880673 localhost minikube]
	I1014 20:18:11.337710  421402 provision.go:177] copyRemoteCerts
	I1014 20:18:11.337789  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 20:18:11.337818  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.340906  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.341359  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.341393  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.341577  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.341801  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.341949  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.342068  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:11.426929  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 20:18:11.460762  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 20:18:11.493090  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 20:18:11.527196  421402 provision.go:87] duration metric: took 328.61751ms to configureAuth
	I1014 20:18:11.527242  421402 buildroot.go:189] setting minikube options for container-runtime
	I1014 20:18:11.527513  421402 config.go:182] Loaded profile config "bridge-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:11.527698  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.531881  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.532435  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.532475  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.532855  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.533121  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.533363  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.533562  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.533772  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.534056  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.534080  421402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 20:18:11.810259  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
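The provisioning step above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS marking the service CIDR (10.96.0.0/12) as an insecure registry, then restarts crio; presumably the guest's crio.service pulls the variable in via an EnvironmentFile= directive (an assumption about the Buildroot image, not shown in these logs). Inside the guest this can be checked with:

	# sketch: confirm how the option reaches crio (run inside the guest)
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -A1 -i EnvironmentFile   # assumption: unit sources the sysconfig file
	ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry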
	
	I1014 20:18:11.810308  421402 main.go:141] libmachine: Checking connection to Docker...
	I1014 20:18:11.810336  421402 main.go:141] libmachine: (bridge-880673) Calling .GetURL
	I1014 20:18:11.812149  421402 main.go:141] libmachine: (bridge-880673) DBG | using libvirt version 8000000
	I1014 20:18:11.815234  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.815595  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.815640  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.815928  421402 main.go:141] libmachine: Docker is up and running!
	I1014 20:18:11.815951  421402 main.go:141] libmachine: Reticulating splines...
	I1014 20:18:11.815960  421402 client.go:171] duration metric: took 20.176929121s to LocalClient.Create
	I1014 20:18:11.815991  421402 start.go:167] duration metric: took 20.177016841s to libmachine.API.Create "bridge-880673"
	I1014 20:18:11.816003  421402 start.go:293] postStartSetup for "bridge-880673" (driver="kvm2")
	I1014 20:18:11.816014  421402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 20:18:11.816042  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:11.816326  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 20:18:11.816366  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.819358  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.819831  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.819858  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.820144  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.820458  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.820707  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.820915  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:11.907189  421402 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 20:18:11.912534  421402 info.go:137] Remote host: Buildroot 2025.02
	I1014 20:18:11.912577  421402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/addons for local assets ...
	I1014 20:18:11.912663  421402 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-364627/.minikube/files for local assets ...
	I1014 20:18:11.912778  421402 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem -> 3686342.pem in /etc/ssl/certs
	I1014 20:18:11.912956  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 20:18:11.929717  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:18:11.964912  421402 start.go:296] duration metric: took 148.891523ms for postStartSetup
	I1014 20:18:11.964972  421402 main.go:141] libmachine: (bridge-880673) Calling .GetConfigRaw
	I1014 20:18:11.965780  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:11.969258  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.969713  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.969742  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.970017  421402 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/config.json ...
	I1014 20:18:11.970236  421402 start.go:128] duration metric: took 20.355729631s to createHost
	I1014 20:18:11.970263  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:11.973266  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.973695  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:11.973729  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:11.973951  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:11.974158  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.974374  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:11.974517  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:11.974689  421402 main.go:141] libmachine: Using SSH client type: native
	I1014 20:18:11.975021  421402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1014 20:18:11.975038  421402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 20:18:12.083424  421402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1760473092.051952819
	
	I1014 20:18:12.083453  421402 fix.go:216] guest clock: 1760473092.051952819
	I1014 20:18:12.083464  421402 fix.go:229] Guest: 2025-10-14 20:18:12.051952819 +0000 UTC Remote: 2025-10-14 20:18:11.970250125 +0000 UTC m=+39.025245163 (delta=81.702694ms)
	I1014 20:18:12.083494  421402 fix.go:200] guest clock delta is within tolerance: 81.702694ms
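The clock check above runs `date +%s.%N` on the guest and diffs it against the host-side timestamp. A sketch of that parse-and-compare, using the exact values from the log; the 2s tolerance is an assumption for illustration, not necessarily fix.go's threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as
// "1760473092.051952819" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseGuestClock("1760473092.051952819")
	if err != nil {
		panic(err)
	}
	// Host-side "Remote" timestamp, taken from the fix.go:229 line above.
	remote := time.Date(2025, 10, 14, 20, 18, 11, 970250125, time.UTC)
	delta := guest.Sub(remote)
	tolerance := 2 * time.Second // assumed threshold for the sketch
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() < tolerance)
}
```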
	I1014 20:18:12.083512  421402 start.go:83] releasing machines lock for "bridge-880673", held for 20.469208293s
	I1014 20:18:12.083543  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.083972  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:12.087662  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.088178  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.088210  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.088501  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089069  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089284  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:12.089444  421402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 20:18:12.089492  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:12.089796  421402 ssh_runner.go:195] Run: cat /version.json
	I1014 20:18:12.089822  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:12.093687  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.093934  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094119  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.094151  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094397  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:12.094530  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:12.094555  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:12.094628  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:12.094829  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:12.094921  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:12.095102  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:12.095277  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:12.095286  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:12.095528  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:12.212043  421402 ssh_runner.go:195] Run: systemctl --version
	I1014 20:18:12.219839  421402 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 20:18:12.396741  421402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 20:18:12.403858  421402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 20:18:12.403971  421402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 20:18:12.432004  421402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
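The `find ... -exec mv {} {}.mk_disabled` step above sidelines any preinstalled bridge/podman CNI configs so the chosen CNI wins. A local sketch of the same rename pass (minikube runs it over SSH inside the guest; `disableConflictingCNI` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs under dir to
// *.mk_disabled, matching the logged find/mv invocation.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", disabled)
}
```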
	I1014 20:18:12.432033  421402 start.go:495] detecting cgroup driver to use...
	I1014 20:18:12.432099  421402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 20:18:12.461465  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 20:18:12.490577  421402 docker.go:218] disabling cri-docker service (if available) ...
	I1014 20:18:12.490671  421402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 20:18:12.520721  421402 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 20:18:12.539982  421402 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 20:18:12.718589  421402 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 20:18:12.954522  421402 docker.go:234] disabling docker service ...
	I1014 20:18:12.954602  421402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 20:18:12.974008  421402 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 20:18:12.992039  421402 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 20:18:13.176121  421402 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 20:18:13.329738  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 20:18:13.350383  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 20:18:13.379020  421402 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1014 20:18:13.379096  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.393521  421402 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 20:18:13.393622  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.408132  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.424356  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.438067  421402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 20:18:13.454323  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.468678  421402 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 20:18:13.492652  421402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
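The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and inject the unprivileged-port sysctl. A sketch covering the first two substitutions with the same match-any-prefix semantics as the logged sed expressions (local file access assumed; the real code goes through ssh_runner):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the logged substitutions: replace any existing
// pause_image and cgroup_manager lines with the pinned values.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```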
	I1014 20:18:13.506834  421402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 20:18:13.518520  421402 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 20:18:13.518601  421402 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 20:18:13.539443  421402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 20:18:13.553362  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:13.714608  421402 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 20:18:13.849560  421402 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 20:18:13.849657  421402 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 20:18:13.856369  421402 start.go:563] Will wait 60s for crictl version
	I1014 20:18:13.856447  421402 ssh_runner.go:195] Run: which crictl
	I1014 20:18:13.861030  421402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 20:18:13.908761  421402 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 20:18:13.908888  421402 ssh_runner.go:195] Run: crio --version
	I1014 20:18:13.943901  421402 ssh_runner.go:195] Run: crio --version
	I1014 20:18:13.977258  421402 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1014 20:18:11.554055  421087 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 20:18:11.560813  421087 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1014 20:18:11.560837  421087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4415 bytes)
	I1014 20:18:11.588535  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 20:18:12.126533  421087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:18:12.126608  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:12.126681  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-880673 minikube.k8s.io/updated_at=2025_10_14T20_18_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=flannel-880673 minikube.k8s.io/primary=true
	I1014 20:18:12.315145  421087 ops.go:34] apiserver oom_adj: -16
	I1014 20:18:12.315188  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:12.816025  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:13.315604  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:13.815525  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:14.315435  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:14.816250  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:15.316274  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:15.815903  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:16.315968  421087 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:16.454604  421087 kubeadm.go:1113] duration metric: took 4.328060098s to wait for elevateKubeSystemPrivileges
	I1014 20:18:16.454643  421087 kubeadm.go:402] duration metric: took 17.541607536s to StartCluster
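The repeated `kubectl get sa default` lines above are a ~500ms poll: kubeadm's RBAC bootstrap cannot proceed until the default service account exists, so minikube retries until it does (about 4.3s here). A sketch of that retry loop under the same kubeconfig path; the helper name and timeout are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds,
// mirroring the ~500ms retry cadence visible in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // default service account exists; RBAC setup can proceed
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println("gave up waiting for default service account:", err)
	}
}
```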
	I1014 20:18:16.454664  421087 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:16.454735  421087 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:18:16.456623  421087 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:16.456921  421087 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:18:16.457029  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:18:16.457329  421087 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:16.457372  421087 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:18:16.457439  421087 addons.go:69] Setting storage-provisioner=true in profile "flannel-880673"
	I1014 20:18:16.457455  421087 addons.go:238] Setting addon storage-provisioner=true in "flannel-880673"
	I1014 20:18:16.457481  421087 host.go:66] Checking if "flannel-880673" exists ...
	I1014 20:18:16.457858  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.457879  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.457981  421087 addons.go:69] Setting default-storageclass=true in profile "flannel-880673"
	I1014 20:18:16.458001  421087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-880673"
	I1014 20:18:16.458307  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.458363  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.463489  421087 out.go:179] * Verifying Kubernetes components...
	I1014 20:18:16.465079  421087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:16.478450  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I1014 20:18:16.478459  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I1014 20:18:16.479191  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.479396  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.479983  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.480011  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.480161  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.480183  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.480477  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.480791  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.481287  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.481344  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.481500  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.491227  421087 addons.go:238] Setting addon default-storageclass=true in "flannel-880673"
	I1014 20:18:16.491426  421087 host.go:66] Checking if "flannel-880673" exists ...
	I1014 20:18:16.491952  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.492089  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.505488  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I1014 20:18:16.507142  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.507763  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.507785  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.508447  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.508733  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.512174  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:18:16.516578  421087 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1014 20:18:12.636730  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:14.638888  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:16.641600  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:13.978473  421402 main.go:141] libmachine: (bridge-880673) Calling .GetIP
	I1014 20:18:13.981686  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:13.982133  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:13.982173  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:13.982419  421402 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 20:18:13.987171  421402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
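The grep-then-rewrite pipeline above makes the host.minikube.internal record idempotent: any stale line ending in a tab plus the hostname is dropped before the fresh entry is appended. A local sketch of the same rewrite (`injectHostRecord` is a hypothetical helper; the real code runs the bash pipeline over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// injectHostRecord drops any existing "<ip>\t<name>" line, then appends the
// current one, matching the grep -v / echo / cp pipeline in the log.
func injectHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := injectHostRecord("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```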
	I1014 20:18:14.003746  421402 kubeadm.go:883] updating cluster {Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 20:18:14.003909  421402 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 20:18:14.003984  421402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:18:14.045704  421402 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1014 20:18:14.045797  421402 ssh_runner.go:195] Run: which lz4
	I1014 20:18:14.050596  421402 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 20:18:14.055602  421402 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 20:18:14.055637  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1014 20:18:15.683347  421402 crio.go:462] duration metric: took 1.632759736s to copy over tarball
	I1014 20:18:15.683458  421402 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 20:18:17.734331  421402 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.050821196s)
	I1014 20:18:17.734369  421402 crio.go:469] duration metric: took 2.050979566s to extract the tarball
	I1014 20:18:17.734381  421402 ssh_runner.go:146] rm: /preloaded.tar.lz4
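The preload path above is: stat to check for an existing tarball, scp the 409MB archive, stream-extract it with xattrs preserved, then remove it. A sketch of the extraction step under the same flags (requires the external tar and lz4 binaries, as on the minikube guest image; helper name is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball the way the log shows:
// tar with security.capability xattrs kept, decompressed through lz4, into /var.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Cleanup mirrors the ssh_runner rm step that follows in the log.
	_ = os.Remove("/preloaded.tar.lz4")
}
```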
	I1014 20:18:17.779609  421402 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 20:18:17.833745  421402 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 20:18:17.833783  421402 cache_images.go:85] Images are preloaded, skipping loading
	I1014 20:18:17.833794  421402 kubeadm.go:934] updating node { 192.168.61.105 8443 v1.34.1 crio true true} ...
	I1014 20:18:17.833949  421402 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-880673 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I1014 20:18:17.834056  421402 ssh_runner.go:195] Run: crio config
	I1014 20:18:17.902568  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:18:17.902614  421402 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1014 20:18:17.902643  421402 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-880673 NodeName:bridge-880673 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 20:18:17.902870  421402 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-880673"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.105"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
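The kubeadm config above is rendered from the per-profile values logged at kubeadm.go:190 (advertise address, node name, pod and service subnets). A trimmed sketch of rendering such a config from a Go template, in the spirit of minikube's kubeadm config generation; the struct and template here are illustrative, not minikube's actual ones:

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the handful of per-profile values visible in the log.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{"192.168.61.105", 8443, "bridge-880673", "10.244.0.0/16", "10.96.0.0/12"}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```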
	I1014 20:18:17.902951  421402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1014 20:18:17.916601  421402 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 20:18:17.916685  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 20:18:17.931432  421402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 20:18:17.955257  421402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 20:18:17.976773  421402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1014 20:18:16.518868  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I1014 20:18:16.519156  421087 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:16.519189  421087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:18:16.519216  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:18:16.519568  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.520256  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.520366  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.520848  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.521723  421087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:16.521776  421087 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:16.524743  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.525376  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:18:16.525430  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.525459  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:18:16.525706  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:18:16.525936  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:18:16.526249  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:18:16.543071  421087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I1014 20:18:16.543723  421087 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:16.544379  421087 main.go:141] libmachine: Using API Version  1
	I1014 20:18:16.544416  421087 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:16.544902  421087 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:16.545161  421087 main.go:141] libmachine: (flannel-880673) Calling .GetState
	I1014 20:18:16.547746  421087 main.go:141] libmachine: (flannel-880673) Calling .DriverName
	I1014 20:18:16.547986  421087 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:16.548005  421087 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:18:16.548027  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHHostname
	I1014 20:18:16.552546  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.553146  421087 main.go:141] libmachine: (flannel-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:0d:31", ip: ""} in network mk-flannel-880673: {Iface:virbr2 ExpiryTime:2025-10-14 21:17:46 +0000 UTC Type:0 Mac:52:54:00:d6:0d:31 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:flannel-880673 Clientid:01:52:54:00:d6:0d:31}
	I1014 20:18:16.553179  421087 main.go:141] libmachine: (flannel-880673) DBG | domain flannel-880673 has defined IP address 192.168.39.78 and MAC address 52:54:00:d6:0d:31 in network mk-flannel-880673
	I1014 20:18:16.553702  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHPort
	I1014 20:18:16.553918  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHKeyPath
	I1014 20:18:16.554145  421087 main.go:141] libmachine: (flannel-880673) Calling .GetSSHUsername
	I1014 20:18:16.554417  421087 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/flannel-880673/id_rsa Username:docker}
	I1014 20:18:16.692451  421087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 20:18:16.853395  421087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:17.217753  421087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:17.236857  421087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:17.687457  421087 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 20:18:17.687804  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.687826  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.688344  421087 main.go:141] libmachine: (flannel-880673) DBG | Closing plugin on server side
	I1014 20:18:17.688399  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.688407  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:17.688417  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.688426  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.688755  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.688770  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:17.689423  421087 node_ready.go:35] waiting up to 15m0s for node "flannel-880673" to be "Ready" ...
	I1014 20:18:17.718251  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:17.718276  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:17.718584  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:17.718604  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.003907  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:18.003930  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:18.004266  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:18.004288  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.004304  421087 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:18.004324  421087 main.go:141] libmachine: (flannel-880673) Calling .Close
	I1014 20:18:18.004625  421087 main.go:141] libmachine: (flannel-880673) DBG | Closing plugin on server side
	I1014 20:18:18.004672  421087 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:18.004689  421087 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:18.006767  421087 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1014 20:18:18.001420  421402 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1014 20:18:18.006728  421402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 20:18:18.023574  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:18.187741  421402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:18.228788  421402 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673 for IP: 192.168.61.105
	I1014 20:18:18.228812  421402 certs.go:195] generating shared ca certs ...
	I1014 20:18:18.228834  421402 certs.go:227] acquiring lock for ca certs: {Name:mkddeaa8fb7f14aff32554669329c3967650976a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.228995  421402 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key
	I1014 20:18:18.229040  421402 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key
	I1014 20:18:18.229047  421402 certs.go:257] generating profile certs ...
	I1014 20:18:18.229096  421402 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key
	I1014 20:18:18.229110  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt with IP's: []
	I1014 20:18:18.398166  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt ...
	I1014 20:18:18.398200  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: {Name:mk595ad0b234ff7452ec47aa1d9be0f57df00f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.398397  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key ...
	I1014 20:18:18.398414  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.key: {Name:mk2a01d027ec022340d98e24a988207f5bf3eecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.398551  421402 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14
	I1014 20:18:18.398571  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.105]
	I1014 20:18:18.722080  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 ...
	I1014 20:18:18.722114  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14: {Name:mk73c594db5b49ecd1f5ae89daf3677a9c0b1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.722308  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14 ...
	I1014 20:18:18.722348  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14: {Name:mk256bd645292252d9623f1c66667da60f375e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.722451  421402 certs.go:382] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt.9bc94a14 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt
	I1014 20:18:18.722550  421402 certs.go:386] copying /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key.9bc94a14 -> /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key
	I1014 20:18:18.722623  421402 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key
	I1014 20:18:18.722639  421402 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt with IP's: []
	I1014 20:18:18.952984  421402 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt ...
	I1014 20:18:18.953017  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt: {Name:mk32032be198c8c46cdac767e584ac6bc5628c2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:18.953215  421402 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key ...
	I1014 20:18:18.953231  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key: {Name:mk61cb8addb3b895a4ab57106477a1490ec60125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
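The cert flow above generates three profile certs: a client cert, an apiserver serving cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.105], and an aggregator proxy-client cert. A sketch of the SAN-bearing serving cert using the standard library, as crypto.go does; self-signed here for brevity, whereas minikube signs with its profile CA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs logged for the apiserver cert above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.105"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
```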
	I1014 20:18:18.953815  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem (1338 bytes)
	W1014 20:18:18.953892  421402 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634_empty.pem, impossibly tiny 0 bytes
	I1014 20:18:18.953904  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 20:18:18.953953  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/ca.pem (1082 bytes)
	I1014 20:18:18.953988  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/cert.pem (1123 bytes)
	I1014 20:18:18.954014  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/certs/key.pem (1675 bytes)
	I1014 20:18:18.954063  421402 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem (1708 bytes)
	I1014 20:18:18.955513  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 20:18:19.005900  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 20:18:19.041417  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 20:18:19.072123  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 20:18:19.106421  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 20:18:19.138409  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 20:18:19.170789  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 20:18:19.202103  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 20:18:19.235753  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 20:18:19.268088  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/certs/368634.pem --> /usr/share/ca-certificates/368634.pem (1338 bytes)
	I1014 20:18:19.298019  421402 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/ssl/certs/3686342.pem --> /usr/share/ca-certificates/3686342.pem (1708 bytes)
	I1014 20:18:19.328880  421402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 20:18:19.352044  421402 ssh_runner.go:195] Run: openssl version
	I1014 20:18:19.359404  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 20:18:19.372942  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.379120  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.379193  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 20:18:19.386935  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 20:18:19.401507  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/368634.pem && ln -fs /usr/share/ca-certificates/368634.pem /etc/ssl/certs/368634.pem"
	I1014 20:18:19.415930  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.421263  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 19:18 /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.421350  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/368634.pem
	I1014 20:18:19.428985  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/368634.pem /etc/ssl/certs/51391683.0"
	I1014 20:18:19.443684  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3686342.pem && ln -fs /usr/share/ca-certificates/3686342.pem /etc/ssl/certs/3686342.pem"
	I1014 20:18:19.457617  421402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.463583  421402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 19:18 /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.463673  421402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3686342.pem
	I1014 20:18:19.471173  421402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3686342.pem /etc/ssl/certs/3ec20f2e.0"
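The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust store under its OpenSSL subject-hash name (e.g. b5213941.0), which is how TLS libraries look certificates up in /etc/ssl/certs. A sketch of that pairing (requires the openssl binary; helper name is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// the <hash>.0 symlink in certsDir, like the logged openssl + ln -fs steps.
func linkCertByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
```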
	I1014 20:18:19.486280  421402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 20:18:19.491757  421402 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 20:18:19.491833  421402 kubeadm.go:400] StartCluster: {Name:bridge-880673 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-880673 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 20:18:19.491916  421402 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 20:18:19.491967  421402 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 20:18:19.533746  421402 cri.go:89] found id: ""
	I1014 20:18:19.533842  421402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 20:18:19.546535  421402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 20:18:19.558793  421402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 20:18:19.571345  421402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 20:18:19.571367  421402 kubeadm.go:157] found existing configuration files:
	
	I1014 20:18:19.571414  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 20:18:19.582436  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 20:18:19.582513  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 20:18:19.595295  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 20:18:19.606706  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 20:18:19.606792  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 20:18:19.619650  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 20:18:19.631416  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 20:18:19.631489  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 20:18:19.647609  421402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 20:18:19.660158  421402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 20:18:19.660231  421402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 20:18:19.672633  421402 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 20:18:19.734127  421402 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1014 20:18:19.734930  421402 kubeadm.go:318] [preflight] Running pre-flight checks
	I1014 20:18:19.834485  421402 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 20:18:19.834705  421402 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 20:18:19.834838  421402 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 20:18:19.845500  421402 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 20:18:18.011120  421087 addons.go:514] duration metric: took 1.553739803s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1014 20:18:18.194030  421087 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-880673" context rescaled to 1 replicas
	W1014 20:18:19.694434  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:19.135894  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	W1014 20:18:21.135981  418230 pod_ready.go:104] pod "coredns-66bc5c9577-489jr" is not "Ready", error: <nil>
	I1014 20:18:20.019215  421402 out.go:252]   - Generating certificates and keys ...
	I1014 20:18:20.019364  421402 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1014 20:18:20.019450  421402 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1014 20:18:20.019568  421402 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 20:18:20.423172  421402 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1014 20:18:20.565417  421402 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1014 20:18:20.831800  421402 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1014 20:18:20.908719  421402 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1014 20:18:20.908947  421402 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [bridge-880673 localhost] and IPs [192.168.61.105 127.0.0.1 ::1]
	I1014 20:18:21.163287  421402 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1014 20:18:21.163517  421402 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [bridge-880673 localhost] and IPs [192.168.61.105 127.0.0.1 ::1]
	I1014 20:18:21.612720  421402 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 20:18:21.653202  421402 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 20:18:21.915336  421402 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1014 20:18:21.915538  421402 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 20:18:22.075178  421402 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 20:18:22.517322  421402 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 20:18:22.836783  421402 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 20:18:23.000293  421402 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 20:18:23.290416  421402 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 20:18:23.290921  421402 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 20:18:23.293499  421402 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 20:18:22.138302  418230 pod_ready.go:94] pod "coredns-66bc5c9577-489jr" is "Ready"
	I1014 20:18:22.138381  418230 pod_ready.go:86] duration metric: took 36.508591303s for pod "coredns-66bc5c9577-489jr" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.142498  418230 pod_ready.go:83] waiting for pod "etcd-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.147755  418230 pod_ready.go:94] pod "etcd-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.147785  418230 pod_ready.go:86] duration metric: took 5.253572ms for pod "etcd-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.150369  418230 pod_ready.go:83] waiting for pod "kube-apiserver-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.155001  418230 pod_ready.go:94] pod "kube-apiserver-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.155028  418230 pod_ready.go:86] duration metric: took 4.637349ms for pod "kube-apiserver-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.158035  418230 pod_ready.go:83] waiting for pod "kube-controller-manager-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.335464  418230 pod_ready.go:94] pod "kube-controller-manager-enable-default-cni-880673" is "Ready"
	I1014 20:18:22.335500  418230 pod_ready.go:86] duration metric: took 177.43826ms for pod "kube-controller-manager-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.535176  418230 pod_ready.go:83] waiting for pod "kube-proxy-qm5zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:22.935782  418230 pod_ready.go:94] pod "kube-proxy-qm5zb" is "Ready"
	I1014 20:18:22.935816  418230 pod_ready.go:86] duration metric: took 400.604632ms for pod "kube-proxy-qm5zb" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.135483  418230 pod_ready.go:83] waiting for pod "kube-scheduler-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.536595  418230 pod_ready.go:94] pod "kube-scheduler-enable-default-cni-880673" is "Ready"
	I1014 20:18:23.536634  418230 pod_ready.go:86] duration metric: took 401.119182ms for pod "kube-scheduler-enable-default-cni-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:23.536650  418230 pod_ready.go:40] duration metric: took 37.911908956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:23.584605  418230 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:18:23.587307  418230 out.go:179] * Done! kubectl is now configured to use "enable-default-cni-880673" cluster and "default" namespace by default
	W1014 20:18:23.592178  418230 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 70a75ecc-5e4e-4ac8-9720-1b3d7c8fcb5b
	W1014 20:18:22.193864  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:24.694222  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	I1014 20:18:23.295285  421402 out.go:252]   - Booting up control plane ...
	I1014 20:18:23.295424  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 20:18:23.295536  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 20:18:23.295633  421402 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 20:18:23.320806  421402 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 20:18:23.320994  421402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1014 20:18:23.329177  421402 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1014 20:18:23.329279  421402 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 20:18:23.330288  421402 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1014 20:18:23.529018  421402 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 20:18:23.529195  421402 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 20:18:24.530359  421402 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001885541s
	I1014 20:18:24.535053  421402 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1014 20:18:24.535202  421402 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.61.105:8443/livez
	I1014 20:18:24.535336  421402 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1014 20:18:24.535453  421402 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1014 20:18:26.694542  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	W1014 20:18:29.194905  421087 node_ready.go:57] node "flannel-880673" has "Ready":"False" status (will retry)
	I1014 20:18:28.543638  421402 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.010215569s
	I1014 20:18:29.650178  421402 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.117608499s
	I1014 20:18:31.034085  421402 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501687487s
	I1014 20:18:31.053112  421402 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 20:18:31.082721  421402 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 20:18:31.110576  421402 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 20:18:31.110938  421402 kubeadm.go:318] [mark-control-plane] Marking the node bridge-880673 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 20:18:31.135025  421402 kubeadm.go:318] [bootstrap-token] Using token: toe6ef.s59wh81d0jyqrdao
	I1014 20:18:31.136116  421402 out.go:252]   - Configuring RBAC rules ...
	I1014 20:18:31.136263  421402 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 20:18:31.150468  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 20:18:31.166893  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 20:18:31.173525  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 20:18:31.180987  421402 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 20:18:31.190512  421402 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 20:18:31.446174  421402 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 20:18:31.914059  421402 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1014 20:18:32.444243  421402 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1014 20:18:32.445280  421402 kubeadm.go:318] 
	I1014 20:18:32.445407  421402 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1014 20:18:32.445432  421402 kubeadm.go:318] 
	I1014 20:18:32.445533  421402 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1014 20:18:32.445545  421402 kubeadm.go:318] 
	I1014 20:18:32.445582  421402 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1014 20:18:32.445669  421402 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 20:18:32.445737  421402 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 20:18:32.445745  421402 kubeadm.go:318] 
	I1014 20:18:32.445817  421402 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1014 20:18:32.445826  421402 kubeadm.go:318] 
	I1014 20:18:32.445892  421402 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 20:18:32.445902  421402 kubeadm.go:318] 
	I1014 20:18:32.445963  421402 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1014 20:18:32.446060  421402 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 20:18:32.446157  421402 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 20:18:32.446168  421402 kubeadm.go:318] 
	I1014 20:18:32.446298  421402 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 20:18:32.446440  421402 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1014 20:18:32.446452  421402 kubeadm.go:318] 
	I1014 20:18:32.446585  421402 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token toe6ef.s59wh81d0jyqrdao \
	I1014 20:18:32.446682  421402 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d \
	I1014 20:18:32.446716  421402 kubeadm.go:318] 	--control-plane 
	I1014 20:18:32.446728  421402 kubeadm.go:318] 
	I1014 20:18:32.446870  421402 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1014 20:18:32.446882  421402 kubeadm.go:318] 
	I1014 20:18:32.446998  421402 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token toe6ef.s59wh81d0jyqrdao \
	I1014 20:18:32.447153  421402 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:a62c4f11982899337eae6d2ac06abdf2a9e3ffb256514381b9172598c708cb1d 
	I1014 20:18:32.448453  421402 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 20:18:32.448489  421402 cni.go:84] Creating CNI manager for "bridge"
	I1014 20:18:32.450052  421402 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 20:18:32.451634  421402 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 20:18:32.474124  421402 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
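The 496-byte conflist payload itself is not reproduced in the log. A bridge CNI config of the kind minikube writes to /etc/cni/net.d/1-k8s.conflist generally has the shape below; the field values here are illustrative, not minikube's literal file, and the Go wrapper simply mirrors the "scp memory -->" step of writing generated bytes to that path:

	package main

	import "os"

	// bridgeConflist is an illustrative bridge CNI chain: the bridge
	// plugin for pod networking plus portmap for hostPort support.
	const bridgeConflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Equivalent of the logged "scp memory --> /etc/cni/net.d/1-k8s.conflist".
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}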
	I1014 20:18:32.499680  421402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 20:18:32.499764  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:32.499785  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-880673 minikube.k8s.io/updated_at=2025_10_14T20_18_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662 minikube.k8s.io/name=bridge-880673 minikube.k8s.io/primary=true
	I1014 20:18:32.645374  421402 ops.go:34] apiserver oom_adj: -16
	I1014 20:18:32.645491  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:30.198516  421087 node_ready.go:49] node "flannel-880673" is "Ready"
	I1014 20:18:30.198554  421087 node_ready.go:38] duration metric: took 12.509099505s for node "flannel-880673" to be "Ready" ...
	I1014 20:18:30.198569  421087 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:18:30.198637  421087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:18:30.308625  421087 api_server.go:72] duration metric: took 13.851661115s to wait for apiserver process to appear ...
	I1014 20:18:30.308663  421087 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:18:30.308691  421087 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I1014 20:18:30.323576  421087 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I1014 20:18:30.326433  421087 api_server.go:141] control plane version: v1.34.1
	I1014 20:18:30.326470  421087 api_server.go:131] duration metric: took 17.796983ms to wait for apiserver health ...
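The healthz wait logged here is a plain poll: GET https://<apiserver>:8443/healthz until it answers 200 with body "ok". A self-contained Go sketch of that loop (InsecureSkipVerify only keeps the example standalone; minikube itself trusts the cluster CA, and the poll interval is an assumption):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it answers
	// 200 or the deadline passes, mirroring the logged wait loop.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.39.78:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}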
	I1014 20:18:30.326481  421087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:18:30.348628  421087 system_pods.go:59] 7 kube-system pods found
	I1014 20:18:30.348690  421087 system_pods.go:61] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.348704  421087 system_pods.go:61] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.348713  421087 system_pods.go:61] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.348720  421087 system_pods.go:61] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.348726  421087 system_pods.go:61] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.348732  421087 system_pods.go:61] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.348745  421087 system_pods.go:61] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.348756  421087 system_pods.go:74] duration metric: took 22.265823ms to wait for pod list to return data ...
	I1014 20:18:30.348774  421087 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:18:30.366984  421087 default_sa.go:45] found service account: "default"
	I1014 20:18:30.367018  421087 default_sa.go:55] duration metric: took 18.234312ms for default service account to be created ...
	I1014 20:18:30.367034  421087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:18:30.448504  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:30.448544  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.448552  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.448577  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.448583  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.448589  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.448596  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.448605  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.448654  421087 retry.go:31] will retry after 195.575996ms: missing components: kube-dns
	I1014 20:18:30.745034  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:30.745071  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:30.745077  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:30.745082  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:30.745087  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:30.745090  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:30.745093  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:30.745098  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:30.745115  421087 retry.go:31] will retry after 300.243195ms: missing components: kube-dns
	I1014 20:18:31.050699  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:31.050738  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:31.050748  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:31.050764  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:31.050770  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:31.050776  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:31.050781  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:31.050811  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:31.050835  421087 retry.go:31] will retry after 422.638473ms: missing components: kube-dns
	I1014 20:18:31.479212  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:31.479247  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:31.479253  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:31.479267  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:31.479271  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:31.479274  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:31.479277  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:31.479287  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:31.479305  421087 retry.go:31] will retry after 552.0673ms: missing components: kube-dns
	I1014 20:18:32.036669  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:32.036713  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:32.036723  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:32.036731  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:32.036739  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:32.036745  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:32.036750  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:32.036757  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:32.036779  421087 retry.go:31] will retry after 475.098529ms: missing components: kube-dns
	I1014 20:18:32.517112  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:32.517149  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:32.517155  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:32.517161  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:32.517165  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:32.517169  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:32.517172  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:32.517176  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:32.517197  421087 retry.go:31] will retry after 953.369281ms: missing components: kube-dns
	I1014 20:18:33.476303  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:33.476370  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:33.476377  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:33.476383  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:33.476387  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:33.476392  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:33.476397  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:33.476402  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:33.476425  421087 retry.go:31] will retry after 920.517462ms: missing components: kube-dns
	I1014 20:18:34.401821  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:34.401853  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:34.401859  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:34.401866  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:34.401870  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:34.401873  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:34.401876  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:34.401879  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:34.401896  421087 retry.go:31] will retry after 1.443477712s: missing components: kube-dns
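The retry.go lines above record a wait-for-kube-dns loop whose intervals roughly grow with jitter (196ms, 300ms, 423ms, 552ms, ...). A minimal sketch of such a jittered backoff loop, with a stand-in check function in place of minikube's real system_pods predicate (the 200ms base and 50% jitter are assumptions, not minikube's exact parameters):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff keeps calling check until it succeeds, sleeping
	// for a jittered, roughly doubling interval between attempts, like
	// the "will retry after ..." lines above.
	func retryWithBackoff(check func() error, maxWait time.Duration) error {
		wait := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up: %w", err)
			}
			// Up to 50% jitter so parallel waiters do not sync up.
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %s: %v\n", wait+jitter, err)
			time.Sleep(wait + jitter)
			wait *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		}, 2*time.Minute)
		fmt.Println("done:", err)
	}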
	I1014 20:18:33.145737  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:33.646138  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:34.145926  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:34.646210  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:35.146099  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:35.646366  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:36.145624  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:36.646172  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:37.146399  421402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 20:18:37.255131  421402 kubeadm.go:1113] duration metric: took 4.755429786s to wait for elevateKubeSystemPrivileges
	I1014 20:18:37.255172  421402 kubeadm.go:402] duration metric: took 17.763344151s to StartCluster
	I1014 20:18:37.255194  421402 settings.go:142] acquiring lock: {Name:mkb488b5c777750ffd68a70b951fb5c68c216ad7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:37.255288  421402 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:18:37.257778  421402 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-364627/kubeconfig: {Name:mkd77480770143eefcec102bea219ed6716fd3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 20:18:37.258126  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 20:18:37.258144  421402 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.105 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 20:18:37.258219  421402 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 20:18:37.258385  421402 addons.go:69] Setting storage-provisioner=true in profile "bridge-880673"
	I1014 20:18:37.258402  421402 addons.go:238] Setting addon storage-provisioner=true in "bridge-880673"
	I1014 20:18:37.258401  421402 config.go:182] Loaded profile config "bridge-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:18:37.258434  421402 addons.go:69] Setting default-storageclass=true in profile "bridge-880673"
	I1014 20:18:37.258473  421402 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-880673"
	I1014 20:18:37.258445  421402 host.go:66] Checking if "bridge-880673" exists ...
	I1014 20:18:37.258940  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.258978  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.258992  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.259022  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.260006  421402 out.go:179] * Verifying Kubernetes components...
	I1014 20:18:37.261413  421402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 20:18:37.274681  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I1014 20:18:37.275247  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39059
	I1014 20:18:37.275437  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.275840  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.276003  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.276033  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.276388  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.276413  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.276469  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.276761  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.276951  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.277090  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.277137  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.282107  421402 addons.go:238] Setting addon default-storageclass=true in "bridge-880673"
	I1014 20:18:37.282166  421402 host.go:66] Checking if "bridge-880673" exists ...
	I1014 20:18:37.282652  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.282708  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.293798  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I1014 20:18:37.294403  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.294981  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.295009  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.295443  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.295715  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.298299  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:37.298944  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I1014 20:18:37.299497  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.300140  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.300174  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.300461  421402 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 20:18:37.301140  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.301703  421402 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 20:18:37.301740  421402 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:37.301757  421402 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 20:18:37.301767  421402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 20:18:37.301796  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:37.306325  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.306981  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:37.307019  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.307281  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:37.307528  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:37.307752  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:37.307934  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:37.318257  421402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I1014 20:18:37.319117  421402 main.go:141] libmachine: () Calling .GetVersion
	I1014 20:18:37.319763  421402 main.go:141] libmachine: Using API Version  1
	I1014 20:18:37.319794  421402 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 20:18:37.320309  421402 main.go:141] libmachine: () Calling .GetMachineName
	I1014 20:18:37.320587  421402 main.go:141] libmachine: (bridge-880673) Calling .GetState
	I1014 20:18:37.323181  421402 main.go:141] libmachine: (bridge-880673) Calling .DriverName
	I1014 20:18:37.323508  421402 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:37.323528  421402 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 20:18:37.323565  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHHostname
	I1014 20:18:37.327622  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.328225  421402 main.go:141] libmachine: (bridge-880673) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:00:20", ip: ""} in network mk-bridge-880673: {Iface:virbr3 ExpiryTime:2025-10-14 21:18:07 +0000 UTC Type:0 Mac:52:54:00:21:00:20 Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:bridge-880673 Clientid:01:52:54:00:21:00:20}
	I1014 20:18:37.328299  421402 main.go:141] libmachine: (bridge-880673) DBG | domain bridge-880673 has defined IP address 192.168.61.105 and MAC address 52:54:00:21:00:20 in network mk-bridge-880673
	I1014 20:18:37.328692  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHPort
	I1014 20:18:37.328921  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHKeyPath
	I1014 20:18:37.329180  421402 main.go:141] libmachine: (bridge-880673) Calling .GetSSHUsername
	I1014 20:18:37.329376  421402 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/bridge-880673/id_rsa Username:docker}
	I1014 20:18:37.569065  421402 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 20:18:37.673615  421402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 20:18:37.979883  421402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 20:18:38.035566  421402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 20:18:38.581036  421402 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.011923093s)
	I1014 20:18:38.581064  421402 start.go:976] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
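The sed pipeline completed above rewrites the coredns ConfigMap so the guest can resolve host.minikube.internal. Reconstructed from the sed expression in the command itself, the injected Corefile stanza reads (the same pipeline also inserts a log directive before errors):

	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}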
	I1014 20:18:38.582891  421402 node_ready.go:35] waiting up to 15m0s for node "bridge-880673" to be "Ready" ...
	I1014 20:18:38.606013  421402 node_ready.go:49] node "bridge-880673" is "Ready"
	I1014 20:18:38.606049  421402 node_ready.go:38] duration metric: took 23.095286ms for node "bridge-880673" to be "Ready" ...
	I1014 20:18:38.606063  421402 api_server.go:52] waiting for apiserver process to appear ...
	I1014 20:18:38.606116  421402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 20:18:39.093412  421402 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-880673" context rescaled to 1 replicas
	I1014 20:18:39.219229  421402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.18362169s)
	I1014 20:18:39.219308  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219340  421402 api_server.go:72] duration metric: took 1.961152737s to wait for apiserver process to appear ...
	I1014 20:18:39.219366  421402 api_server.go:88] waiting for apiserver healthz status ...
	I1014 20:18:39.219370  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219390  421402 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8443/healthz ...
	I1014 20:18:39.219437  421402 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.239515217s)
	I1014 20:18:39.219485  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219497  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219763  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.219781  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.219790  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219792  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.219798  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219802  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.219810  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.219816  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.219763  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220181  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220216  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.220223  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.220501  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.220528  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.220536  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.232972  421402 api_server.go:279] https://192.168.61.105:8443/healthz returned 200:
	ok
	I1014 20:18:39.234990  421402 api_server.go:141] control plane version: v1.34.1
	I1014 20:18:39.235027  421402 api_server.go:131] duration metric: took 15.652258ms to wait for apiserver health ...
	I1014 20:18:39.235040  421402 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 20:18:39.242166  421402 system_pods.go:59] 8 kube-system pods found
	I1014 20:18:39.242220  421402 system_pods.go:61] "coredns-66bc5c9577-8b8hg" [70f51c62-064a-4e6c-961a-da0757f26ece] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.242234  421402 system_pods.go:61] "coredns-66bc5c9577-z9sbn" [941347c6-aa6a-4d96-b98c-abb8b48702c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.242243  421402 system_pods.go:61] "etcd-bridge-880673" [b0b12648-e276-47be-a4f1-6ddbd23fb520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:18:39.242264  421402 system_pods.go:61] "kube-apiserver-bridge-880673" [736ac614-7af4-4c45-b48b-6f1f4de0d65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:18:39.242272  421402 system_pods.go:61] "kube-controller-manager-bridge-880673" [15158151-826b-4926-8d0f-79f63637d077] Running
	I1014 20:18:39.242284  421402 system_pods.go:61] "kube-proxy-b2vwp" [c5f3d981-d7da-4fbe-8cc3-603f7ee70a2f] Running
	I1014 20:18:39.242289  421402 system_pods.go:61] "kube-scheduler-bridge-880673" [7655ca5b-fd91-4b72-a132-99da3263baef] Running
	I1014 20:18:39.242297  421402 system_pods.go:61] "storage-provisioner" [436f99c1-6baf-41df-992f-a68144437bef] Pending
	I1014 20:18:39.242306  421402 system_pods.go:74] duration metric: took 7.258272ms to wait for pod list to return data ...
	I1014 20:18:39.242331  421402 default_sa.go:34] waiting for default service account to be created ...
	I1014 20:18:39.248981  421402 main.go:141] libmachine: Making call to close driver server
	I1014 20:18:39.249001  421402 main.go:141] libmachine: (bridge-880673) Calling .Close
	I1014 20:18:39.249369  421402 main.go:141] libmachine: Successfully made call to close driver server
	I1014 20:18:39.249392  421402 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 20:18:39.249373  421402 main.go:141] libmachine: (bridge-880673) DBG | Closing plugin on server side
	I1014 20:18:39.250981  421402 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1014 20:18:35.850842  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:35.850895  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:35.850906  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:35.850917  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:35.850933  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:35.850945  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:35.850950  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:35.850955  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:35.850981  421087 retry.go:31] will retry after 1.11930574s: missing components: kube-dns
	I1014 20:18:36.975755  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:36.975789  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:36.975795  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:36.975802  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:36.975805  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:36.975809  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:36.975812  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:36.975815  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:36.975830  421087 retry.go:31] will retry after 1.548344288s: missing components: kube-dns
	I1014 20:18:38.531860  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:38.531901  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:38.531909  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:38.531917  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:38.531924  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:38.531930  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:38.531935  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:38.531939  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:38.531961  421087 retry.go:31] will retry after 2.303983878s: missing components: kube-dns
	I1014 20:18:39.252229  421402 addons.go:514] duration metric: took 1.994016723s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 20:18:39.252625  421402 default_sa.go:45] found service account: "default"
	I1014 20:18:39.252645  421402 default_sa.go:55] duration metric: took 10.306166ms for default service account to be created ...
	I1014 20:18:39.252653  421402 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 20:18:39.256521  421402 system_pods.go:86] 8 kube-system pods found
	I1014 20:18:39.256549  421402 system_pods.go:89] "coredns-66bc5c9577-8b8hg" [70f51c62-064a-4e6c-961a-da0757f26ece] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.256555  421402 system_pods.go:89] "coredns-66bc5c9577-z9sbn" [941347c6-aa6a-4d96-b98c-abb8b48702c5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:39.256563  421402 system_pods.go:89] "etcd-bridge-880673" [b0b12648-e276-47be-a4f1-6ddbd23fb520] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 20:18:39.256568  421402 system_pods.go:89] "kube-apiserver-bridge-880673" [736ac614-7af4-4c45-b48b-6f1f4de0d65c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 20:18:39.256572  421402 system_pods.go:89] "kube-controller-manager-bridge-880673" [15158151-826b-4926-8d0f-79f63637d077] Running
	I1014 20:18:39.256576  421402 system_pods.go:89] "kube-proxy-b2vwp" [c5f3d981-d7da-4fbe-8cc3-603f7ee70a2f] Running
	I1014 20:18:39.256579  421402 system_pods.go:89] "kube-scheduler-bridge-880673" [7655ca5b-fd91-4b72-a132-99da3263baef] Running
	I1014 20:18:39.256583  421402 system_pods.go:89] "storage-provisioner" [436f99c1-6baf-41df-992f-a68144437bef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 20:18:39.256590  421402 system_pods.go:126] duration metric: took 3.932245ms to wait for k8s-apps to be running ...
	I1014 20:18:39.256599  421402 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:18:39.256645  421402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:18:39.282073  421402 system_svc.go:56] duration metric: took 25.461591ms WaitForService to wait for kubelet
	I1014 20:18:39.282105  421402 kubeadm.go:586] duration metric: took 2.023923097s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:18:39.282124  421402 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:18:39.286484  421402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:18:39.286511  421402 node_conditions.go:123] node cpu capacity is 2
	I1014 20:18:39.286525  421402 node_conditions.go:105] duration metric: took 4.396181ms to run NodePressure ...
	I1014 20:18:39.286536  421402 start.go:241] waiting for startup goroutines ...
	I1014 20:18:39.286542  421402 start.go:246] waiting for cluster config update ...
	I1014 20:18:39.286553  421402 start.go:255] writing updated cluster config ...
	I1014 20:18:39.286810  421402 ssh_runner.go:195] Run: rm -f paused
	I1014 20:18:39.293433  421402 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:39.297504  421402 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:18:41.305400  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	I1014 20:18:40.840752  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:40.840790  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:40.840797  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:40.840804  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:40.840844  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:40.840854  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:40.840859  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:40.840862  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:40.840878  421087 retry.go:31] will retry after 3.033191594s: missing components: kube-dns
	I1014 20:18:43.880195  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:43.880247  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 20:18:43.880259  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:43.880268  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:43.880274  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:43.880281  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:43.880287  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:43.880293  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:43.880340  421087 retry.go:31] will retry after 3.158409259s: missing components: kube-dns
	W1014 20:18:43.306042  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	W1014 20:18:45.806697  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	W1014 20:18:47.807458  421402 pod_ready.go:104] pod "coredns-66bc5c9577-8b8hg" is not "Ready", error: <nil>
	I1014 20:18:47.043989  421087 system_pods.go:86] 7 kube-system pods found
	I1014 20:18:47.044031  421087 system_pods.go:89] "coredns-66bc5c9577-t5q7c" [fd900037-32a6-4811-8a8c-134147706022] Running
	I1014 20:18:47.044040  421087 system_pods.go:89] "etcd-flannel-880673" [7a1195b6-1bf1-40a7-9aa9-cc2d5dafe2b9] Running
	I1014 20:18:47.044047  421087 system_pods.go:89] "kube-apiserver-flannel-880673" [2ab1b350-0f52-49dd-8282-546cd96104f0] Running
	I1014 20:18:47.044055  421087 system_pods.go:89] "kube-controller-manager-flannel-880673" [77718da6-91bf-4c39-936e-5acc7b6afb65] Running
	I1014 20:18:47.044063  421087 system_pods.go:89] "kube-proxy-js9r5" [f076c7d6-4d12-413f-bc46-f144270c72b2] Running
	I1014 20:18:47.044068  421087 system_pods.go:89] "kube-scheduler-flannel-880673" [3b779757-7765-4ed7-a710-794d57550f16] Running
	I1014 20:18:47.044073  421087 system_pods.go:89] "storage-provisioner" [3623d0c1-d2f8-45e3-b0f3-d09008f2fbc8] Running
	I1014 20:18:47.044086  421087 system_pods.go:126] duration metric: took 16.677042702s to wait for k8s-apps to be running ...
	I1014 20:18:47.044103  421087 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 20:18:47.044166  421087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 20:18:47.065113  421087 system_svc.go:56] duration metric: took 20.994672ms WaitForService to wait for kubelet
	I1014 20:18:47.065149  421087 kubeadm.go:586] duration metric: took 30.60819358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 20:18:47.065168  421087 node_conditions.go:102] verifying NodePressure condition ...
	I1014 20:18:47.069360  421087 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 20:18:47.069389  421087 node_conditions.go:123] node cpu capacity is 2
	I1014 20:18:47.069403  421087 node_conditions.go:105] duration metric: took 4.230142ms to run NodePressure ...
	I1014 20:18:47.069415  421087 start.go:241] waiting for startup goroutines ...
	I1014 20:18:47.069422  421087 start.go:246] waiting for cluster config update ...
	I1014 20:18:47.069432  421087 start.go:255] writing updated cluster config ...
	I1014 20:18:47.069740  421087 ssh_runner.go:195] Run: rm -f paused
	I1014 20:18:47.075122  421087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:47.081609  421087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t5q7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.089859  421087 pod_ready.go:94] pod "coredns-66bc5c9577-t5q7c" is "Ready"
	I1014 20:18:47.089897  421087 pod_ready.go:86] duration metric: took 8.262276ms for pod "coredns-66bc5c9577-t5q7c" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.092266  421087 pod_ready.go:83] waiting for pod "etcd-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.096766  421087 pod_ready.go:94] pod "etcd-flannel-880673" is "Ready"
	I1014 20:18:47.096790  421087 pod_ready.go:86] duration metric: took 4.49984ms for pod "etcd-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.098892  421087 pod_ready.go:83] waiting for pod "kube-apiserver-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.104135  421087 pod_ready.go:94] pod "kube-apiserver-flannel-880673" is "Ready"
	I1014 20:18:47.104168  421087 pod_ready.go:86] duration metric: took 5.248988ms for pod "kube-apiserver-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.107825  421087 pod_ready.go:83] waiting for pod "kube-controller-manager-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.481377  421087 pod_ready.go:94] pod "kube-controller-manager-flannel-880673" is "Ready"
	I1014 20:18:47.481408  421087 pod_ready.go:86] duration metric: took 373.557889ms for pod "kube-controller-manager-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:47.680640  421087 pod_ready.go:83] waiting for pod "kube-proxy-js9r5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.082711  421087 pod_ready.go:94] pod "kube-proxy-js9r5" is "Ready"
	I1014 20:18:48.082746  421087 pod_ready.go:86] duration metric: took 402.071673ms for pod "kube-proxy-js9r5" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.280325  421087 pod_ready.go:83] waiting for pod "kube-scheduler-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.681055  421087 pod_ready.go:94] pod "kube-scheduler-flannel-880673" is "Ready"
	I1014 20:18:48.681090  421087 pod_ready.go:86] duration metric: took 400.726895ms for pod "kube-scheduler-flannel-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:48.681105  421087 pod_ready.go:40] duration metric: took 1.605941594s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:18:48.731111  421087 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:18:48.733952  421087 out.go:179] * Done! kubectl is now configured to use "flannel-880673" cluster and "default" namespace by default
	I1014 20:18:50.300575  421402 pod_ready.go:99] pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-8b8hg" not found
	I1014 20:18:50.300611  421402 pod_ready.go:86] duration metric: took 11.003081713s for pod "coredns-66bc5c9577-8b8hg" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:18:50.300627  421402 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z9sbn" in "kube-system" namespace to be "Ready" or be gone ...
	W1014 20:18:52.306596  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:54.306681  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:56.308060  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:18:58.308167  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:00.309255  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:02.806751  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:04.807617  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:06.808157  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:08.808270  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:11.309833  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	W1014 20:19:13.808468  421402 pod_ready.go:104] pod "coredns-66bc5c9577-z9sbn" is not "Ready", error: <nil>
	I1014 20:19:16.307018  421402 pod_ready.go:94] pod "coredns-66bc5c9577-z9sbn" is "Ready"
	I1014 20:19:16.307069  421402 pod_ready.go:86] duration metric: took 26.006434395s for pod "coredns-66bc5c9577-z9sbn" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.310071  421402 pod_ready.go:83] waiting for pod "etcd-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.314834  421402 pod_ready.go:94] pod "etcd-bridge-880673" is "Ready"
	I1014 20:19:16.314860  421402 pod_ready.go:86] duration metric: took 4.757751ms for pod "etcd-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.316992  421402 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.321169  421402 pod_ready.go:94] pod "kube-apiserver-bridge-880673" is "Ready"
	I1014 20:19:16.321205  421402 pod_ready.go:86] duration metric: took 4.190547ms for pod "kube-apiserver-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.323265  421402 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.504811  421402 pod_ready.go:94] pod "kube-controller-manager-bridge-880673" is "Ready"
	I1014 20:19:16.504838  421402 pod_ready.go:86] duration metric: took 181.547833ms for pod "kube-controller-manager-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:16.704795  421402 pod_ready.go:83] waiting for pod "kube-proxy-b2vwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.104667  421402 pod_ready.go:94] pod "kube-proxy-b2vwp" is "Ready"
	I1014 20:19:17.104697  421402 pod_ready.go:86] duration metric: took 399.871111ms for pod "kube-proxy-b2vwp" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.305175  421402 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.704034  421402 pod_ready.go:94] pod "kube-scheduler-bridge-880673" is "Ready"
	I1014 20:19:17.704059  421402 pod_ready.go:86] duration metric: took 398.852515ms for pod "kube-scheduler-bridge-880673" in "kube-system" namespace to be "Ready" or be gone ...
	I1014 20:19:17.704072  421402 pod_ready.go:40] duration metric: took 38.410605028s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1014 20:19:17.752777  421402 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1014 20:19:17.754591  421402 out.go:179] * Done! kubectl is now configured to use "bridge-880673" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.759884424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760474060759833957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68fbcceb-78ef-4bce-89b6-b5fb75b956c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.760581851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=751fbf1d-0cf7-4c04-9e6b-e29a21f91467 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.760655723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=751fbf1d-0cf7-4c04-9e6b-e29a21f91467 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.760979697Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473981429393837,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17604
72955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933
be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=751fbf1d-0cf7-4c04-9e6b-e29a21f91467 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.802892167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65f153e2-1338-4df1-ab63-13a066074e9d name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.803070056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65f153e2-1338-4df1-ab63-13a066074e9d name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.804114418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c786b023-bdb0-487b-9bc7-7d66cd466dc6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.804730279Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760474060804705575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c786b023-bdb0-487b-9bc7-7d66cd466dc6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.805462485Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7adb6071-b99c-410d-b515-0e9e79e89e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.805641674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7adb6071-b99c-410d-b515-0e9e79e89e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.805981098Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473981429393837,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17604
72955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933
be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=7adb6071-b99c-410d-b515-0e9e79e89e73 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.843317778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fa4e49c-af68-4d4e-842f-739606029d13 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.843404760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fa4e49c-af68-4d4e-842f-739606029d13 name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.845575330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf320295-8810-4c41-b3ca-d6320e7d966e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.846493290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760474060846422613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf320295-8810-4c41-b3ca-d6320e7d966e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.847330821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b5a5e4e-f53e-45f8-a3ca-5b36cbeca560 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.847387438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b5a5e4e-f53e-45f8-a3ca-5b36cbeca560 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.847611086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473981429393837,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17604
72955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933
be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=8b5a5e4e-f53e-45f8-a3ca-5b36cbeca560 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.886670480Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa49efd9-54ad-4e9f-85b2-daa35acafe7f name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.886758440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa49efd9-54ad-4e9f-85b2-daa35acafe7f name=/runtime.v1.RuntimeService/Version
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.888269170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4704c394-b90e-4cf5-814a-1efd7f2ccae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.889192533Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760474060889123293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4704c394-b90e-4cf5-814a-1efd7f2ccae9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.890075550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6633820-d5c8-45a1-9c77-f1033c272647 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.890143955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6633820-d5c8-45a1-9c77-f1033c272647 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 20:34:20 embed-certs-158674 crio[883]: time="2025-10-14 20:34:20.890352270Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b,PodSandboxId:c6f965189f5cca674456d2c19a018311a12e0fd6c5258e7bc01d33b64fb1d20b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1760473981429393837,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-mz5cm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 30fef46b-43ef-4af7-b50b-ba3f07a7afde,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760472991749515369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:d94ac69e5f9049b9bfe47fec10578ac2f231a4654689654e974e89467fcb9dcb,PodSandboxId:62896cd3ba6e546c712996b4a3cc63cdaa829335e455030501caa64571b81b8a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760472971908454011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f752530f-0f63-471b-86c5-be4cafc867f8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f,PodSandboxId:aceff373a9141797bd735c9ebbf347030abf7e9e92e30d523350db102706a413,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760472968511607743,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-ct9rr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15723a94-e4e7-4bd3-95bd-f264ebed028b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f,PodSandboxId:c6b15e6c26fa2d58b7ad8bcbbdf48acfa369f17c35d550af968ebb99696071bb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760472960836694070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh6wc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 914eea30-a7ad-442e-a8e2-ae0b47413336,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740,PodSandboxId:cbee72f0353c9323b8de563661cfc4947729493bed80fb58b73774f6a414531d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1760472960791138577,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c51ba3c5-d480-40c6-80fe-a7b956740e03,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3,PodSandboxId:e1eb325c1e8e2388bf40ee0e7b70495b5b57ebdd439c9685ed18d900b5e0e46d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760472955974434671,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecd5998fe8fd7e24c8e5315c5cb0861b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e,PodSandboxId:5c55a53d5d8e4e7829930e67c407e1ea346febb61b502891e58aff661fdb9f38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760472955967113869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f768e14cd16d80179285f991605a4ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003,PodSandboxId:adeb9462d402a28252c44a46bf5dcfec10b9228692635ba1599a7e6dbc2f1745,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760472955937234543,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b788033deaae79c31f8d0cb65deb34f,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},
&Container{Id:1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8,PodSandboxId:983272502e832541cb785f2cd1f38db8b2d20ccf383ab4ecd9df2ce933be0578,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760472955899422847,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-158674,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5eb45808bf0719072e2077942e6532db,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6633820-d5c8-45a1-9c77-f1033c272647 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4935f63900040       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      About a minute ago   Exited              dashboard-metrics-scraper   8                   c6f965189f5cc       dashboard-metrics-scraper-6ffb444bf9-mz5cm
	329ec9ece1ee8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago       Running             storage-provisioner         2                   cbee72f0353c9       storage-provisioner
	d94ac69e5f904       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago       Running             busybox                     1                   62896cd3ba6e5       busybox
	19cd1852378ae       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago       Running             coredns                     1                   aceff373a9141       coredns-66bc5c9577-ct9rr
	c43842bd6420c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago       Running             kube-proxy                  1                   c6b15e6c26fa2       kube-proxy-rh6wc
	003294c62d4e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago       Exited              storage-provisioner         1                   cbee72f0353c9       storage-provisioner
	7c22c86b8a4ca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago       Running             etcd                        1                   e1eb325c1e8e2       etcd-embed-certs-158674
	5ad28306f0acd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago       Running             kube-scheduler              1                   5c55a53d5d8e4       kube-scheduler-embed-certs-158674
	760bc07e5c704       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      18 minutes ago       Running             kube-controller-manager     1                   adeb9462d402a       kube-controller-manager-embed-certs-158674
	1e686aedaa84a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      18 minutes ago       Running             kube-apiserver              1                   983272502e832       kube-apiserver-embed-certs-158674
	
	
	==> coredns [19cd1852378ae25b0274781540c1f41994de4f1cb92cff554fcccb073b4da86f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46602 - 52436 "HINFO IN 2360683752022516549.7876052886551378901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.09399007s
	
	
	==> describe nodes <==
	Name:               embed-certs-158674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-158674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=93e78e130b0f4e4236c23940daf4ba8b68c76662
	                    minikube.k8s.io/name=embed-certs-158674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_14T20_13_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Oct 2025 20:12:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-158674
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Oct 2025 20:34:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Oct 2025 20:31:59 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Oct 2025 20:31:59 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Oct 2025 20:31:59 +0000   Tue, 14 Oct 2025 20:12:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Oct 2025 20:31:59 +0000   Tue, 14 Oct 2025 20:16:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.78
	  Hostname:    embed-certs-158674
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc3d3099291b42e0a6671211eb8f48e0
	  System UUID:                fc3d3099-291b-42e0-a667-1211eb8f48e0
	  Boot ID:                    74f9463f-48fa-45b6-9fc9-8e2fee57a938
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-66bc5c9577-ct9rr                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-embed-certs-158674                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-158674             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-158674    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-rh6wc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-158674             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-rbchd               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mz5cm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lhkkm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m                kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeReady                21m                kubelet          Node embed-certs-158674 status is now: NodeReady
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           21m                node-controller  Node embed-certs-158674 event: Registered Node embed-certs-158674 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-158674 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-158674 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node embed-certs-158674 has been rebooted, boot id: 74f9463f-48fa-45b6-9fc9-8e2fee57a938
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-158674 event: Registered Node embed-certs-158674 in Controller
	
	
	==> dmesg <==
	[Oct14 20:15] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001676] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001133] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.791244] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.122025] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.104808] kauditd_printk_skb: 46 callbacks suppressed
	[Oct14 20:16] kauditd_printk_skb: 168 callbacks suppressed
	[  +2.758267] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 149 callbacks suppressed
	[ +18.333556] kauditd_printk_skb: 78 callbacks suppressed
	[ +12.032494] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.120158] kauditd_printk_skb: 32 callbacks suppressed
	[Oct14 20:17] kauditd_printk_skb: 13 callbacks suppressed
	[ +34.932103] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:18] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:19] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:22] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:27] kauditd_printk_skb: 6 callbacks suppressed
	[Oct14 20:33] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [7c22c86b8a4ca8f03f41fa8b78491678bca1c8bbd07958f6836c9e5f3b0291e3] <==
	{"level":"info","ts":"2025-10-14T20:16:15.980789Z","caller":"traceutil/trace.go:172","msg":"trace[901854029] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-rh6wc; range_end:; response_count:1; response_revision:702; }","duration":"566.400931ms","start":"2025-10-14T20:16:15.414381Z","end":"2025-10-14T20:16:15.980782Z","steps":["trace[901854029] 'agreement among raft nodes before linearized reading'  (duration: 566.066733ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:15.980814Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:16:15.414368Z","time spent":"566.440246ms","remote":"127.0.0.1:56754","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":5028,"request content":"key:\"/registry/pods/kube-system/kube-proxy-rh6wc\" limit:1 "}
	{"level":"warn","ts":"2025-10-14T20:16:16.492150Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"394.68349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:16:16.492292Z","caller":"traceutil/trace.go:172","msg":"trace[2046295095] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:702; }","duration":"394.764689ms","start":"2025-10-14T20:16:16.097442Z","end":"2025-10-14T20:16:16.492206Z","steps":["trace[2046295095] 'range keys from in-memory index tree'  (duration: 394.581216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:16:16.492343Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-14T20:16:16.097426Z","time spent":"394.902369ms","remote":"127.0.0.1:56424","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-14T20:17:07.621730Z","caller":"traceutil/trace.go:172","msg":"trace[476807296] linearizableReadLoop","detail":"{readStateIndex:822; appliedIndex:822; }","duration":"144.505606ms","start":"2025-10-14T20:17:07.477185Z","end":"2025-10-14T20:17:07.621690Z","steps":["trace[476807296] 'read index received'  (duration: 144.494662ms)","trace[476807296] 'applied index is now lower than readState.Index'  (duration: 9.281µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-14T20:17:07.703631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"226.363852ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-14T20:17:07.703632Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.950167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-10-14T20:17:07.703707Z","caller":"traceutil/trace.go:172","msg":"trace[637731438] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:769; }","duration":"226.536096ms","start":"2025-10-14T20:17:07.477156Z","end":"2025-10-14T20:17:07.703692Z","steps":["trace[637731438] 'agreement among raft nodes before linearized reading'  (duration: 144.718588ms)","trace[637731438] 'range keys from in-memory index tree'  (duration: 81.573948ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-14T20:17:07.703750Z","caller":"traceutil/trace.go:172","msg":"trace[300380464] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:769; }","duration":"194.082801ms","start":"2025-10-14T20:17:07.509635Z","end":"2025-10-14T20:17:07.703718Z","steps":["trace[300380464] 'agreement among raft nodes before linearized reading'  (duration: 193.851517ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:25.661457Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.660479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:25.662115Z","caller":"traceutil/trace.go:172","msg":"trace[863305639] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:789; }","duration":"201.331327ms","start":"2025-10-14T20:17:25.460771Z","end":"2025-10-14T20:17:25.662102Z","steps":["trace[863305639] 'range keys from in-memory index tree'  (duration: 200.593695ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:17:26.071049Z","caller":"traceutil/trace.go:172","msg":"trace[568462179] linearizableReadLoop","detail":"{readStateIndex:846; appliedIndex:846; }","duration":"156.459804ms","start":"2025-10-14T20:17:25.914567Z","end":"2025-10-14T20:17:26.071027Z","steps":["trace[568462179] 'read index received'  (duration: 156.453449ms)","trace[568462179] 'applied index is now lower than readState.Index'  (duration: 5.25µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-14T20:17:26.071492Z","caller":"traceutil/trace.go:172","msg":"trace[721736249] transaction","detail":"{read_only:false; response_revision:790; number_of_response:1; }","duration":"244.727624ms","start":"2025-10-14T20:17:25.826750Z","end":"2025-10-14T20:17:26.071478Z","steps":["trace[721736249] 'process raft request'  (duration: 244.613551ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:26.072642Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.642916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:26.072798Z","caller":"traceutil/trace.go:172","msg":"trace[1690231268] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:790; }","duration":"101.954334ms","start":"2025-10-14T20:17:25.970831Z","end":"2025-10-14T20:17:26.072786Z","steps":["trace[1690231268] 'agreement among raft nodes before linearized reading'  (duration: 101.618495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-14T20:17:26.074196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.793051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-14T20:17:26.074900Z","caller":"traceutil/trace.go:172","msg":"trace[1848611105] range","detail":"{range_begin:/registry/controllers; range_end:; response_count:0; response_revision:789; }","duration":"160.326012ms","start":"2025-10-14T20:17:25.914561Z","end":"2025-10-14T20:17:26.074887Z","steps":["trace[1848611105] 'agreement among raft nodes before linearized reading'  (duration: 156.753436ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:18:20.803355Z","caller":"traceutil/trace.go:172","msg":"trace[705878479] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"282.105022ms","start":"2025-10-14T20:18:20.521191Z","end":"2025-10-14T20:18:20.803296Z","steps":["trace[705878479] 'process raft request'  (duration: 281.981069ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-14T20:25:57.636243Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1058}
	{"level":"info","ts":"2025-10-14T20:25:57.660050Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1058,"took":"23.349607ms","hash":700514830,"current-db-size-bytes":3358720,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1413120,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-14T20:25:57.660100Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":700514830,"revision":1058,"compact-revision":-1}
	{"level":"info","ts":"2025-10-14T20:30:57.644113Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1341}
	{"level":"info","ts":"2025-10-14T20:30:57.648176Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1341,"took":"3.744895ms","hash":3620692745,"current-db-size-bytes":3358720,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-10-14T20:30:57.648221Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3620692745,"revision":1341,"compact-revision":1058}
	
	
	==> kernel <==
	 20:34:21 up 18 min,  0 users,  load average: 0.08, 0.12, 0.16
	Linux embed-certs-158674 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Sep 18 15:48:18 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1e686aedaa84ab231d58a6ab25ad7d3c53ad049d73f8339a8b04909f1dc8ced8] <==
	I1014 20:31:01.008245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:31:01.008283       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:31:01.008324       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:31:01.009270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:32:01.009424       1 handler_proxy.go:99] no RequestInfo found in the context
	W1014 20:32:01.009480       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:32:01.009494       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:32:01.009508       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1014 20:32:01.009524       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:32:01.010834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:34:01.010539       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:34:01.010661       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 20:34:01.010679       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 20:34:01.011837       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 20:34:01.011915       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 20:34:01.011962       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [760bc07e5c70450d801479d1c3dca4c1f4755d6173b8ade6dce7188b7cf47003] <==
	I1014 20:28:04.825888       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:28:34.677552       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:28:34.834385       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:29:04.684516       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:29:04.843062       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:29:34.689297       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:29:34.850431       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:30:04.696650       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:30:04.867812       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:30:34.703834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:30:34.877317       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:31:04.709495       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:31:04.886159       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:31:34.716398       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:31:34.893626       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:32:04.722610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:32:04.906456       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:32:34.727602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:32:34.915415       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:33:04.736639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:33:04.924238       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:33:34.747176       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:33:34.932577       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1014 20:34:04.757545       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 20:34:04.946187       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c43842bd6420c9d4f21a272e0491c1c894b300385fb8de9d017693e0cace060f] <==
	I1014 20:16:01.363108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1014 20:16:01.464278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1014 20:16:01.464315       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.78"]
	E1014 20:16:01.464567       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 20:16:01.501979       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1014 20:16:01.502064       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 20:16:01.502093       1 server_linux.go:132] "Using iptables Proxier"
	I1014 20:16:01.511630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 20:16:01.512349       1 server.go:527] "Version info" version="v1.34.1"
	I1014 20:16:01.512401       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:16:01.519602       1 config.go:200] "Starting service config controller"
	I1014 20:16:01.519619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1014 20:16:01.519762       1 config.go:106] "Starting endpoint slice config controller"
	I1014 20:16:01.519767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1014 20:16:01.519891       1 config.go:403] "Starting serviceCIDR config controller"
	I1014 20:16:01.519895       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1014 20:16:01.523036       1 config.go:309] "Starting node config controller"
	I1014 20:16:01.524069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1014 20:16:01.524226       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1014 20:16:01.619859       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1014 20:16:01.620059       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1014 20:16:01.620144       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ad28306f0acdab374f3c55d2549350c2887aa8919aad75177b3c04c6c780a4e] <==
	I1014 20:15:58.449475       1 serving.go:386] Generated self-signed cert in-memory
	I1014 20:16:00.098228       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1014 20:16:00.098300       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 20:16:00.109790       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1014 20:16:00.111004       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.114656       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.111021       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.114772       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.111038       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 20:16:00.110967       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1014 20:16:00.115405       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1014 20:16:00.215663       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 20:16:00.216011       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1014 20:16:00.216410       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 14 20:33:34 embed-certs-158674 kubelet[1212]: E1014 20:33:34.745447    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760474014744742941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:34 embed-certs-158674 kubelet[1212]: E1014 20:33:34.745634    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760474014744742941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:36 embed-certs-158674 kubelet[1212]: I1014 20:33:36.408208    1212 scope.go:117] "RemoveContainer" containerID="4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b"
	Oct 14 20:33:36 embed-certs-158674 kubelet[1212]: E1014 20:33:36.408363    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:33:36 embed-certs-158674 kubelet[1212]: E1014 20:33:36.409357    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm" podUID="11c1df79-7653-4919-a97e-456c684eec60"
	Oct 14 20:33:44 embed-certs-158674 kubelet[1212]: E1014 20:33:44.748149    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760474024747483949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:44 embed-certs-158674 kubelet[1212]: E1014 20:33:44.748174    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760474024747483949  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:45 embed-certs-158674 kubelet[1212]: E1014 20:33:45.408619    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:33:49 embed-certs-158674 kubelet[1212]: E1014 20:33:49.408839    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm" podUID="11c1df79-7653-4919-a97e-456c684eec60"
	Oct 14 20:33:50 embed-certs-158674 kubelet[1212]: I1014 20:33:50.407235    1212 scope.go:117] "RemoveContainer" containerID="4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b"
	Oct 14 20:33:50 embed-certs-158674 kubelet[1212]: E1014 20:33:50.407383    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:33:54 embed-certs-158674 kubelet[1212]: E1014 20:33:54.749595    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760474034749262495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:54 embed-certs-158674 kubelet[1212]: E1014 20:33:54.749619    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760474034749262495  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:33:59 embed-certs-158674 kubelet[1212]: E1014 20:33:59.408510    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:34:03 embed-certs-158674 kubelet[1212]: I1014 20:34:03.408009    1212 scope.go:117] "RemoveContainer" containerID="4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b"
	Oct 14 20:34:03 embed-certs-158674 kubelet[1212]: E1014 20:34:03.408343    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	Oct 14 20:34:04 embed-certs-158674 kubelet[1212]: E1014 20:34:04.409847    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm" podUID="11c1df79-7653-4919-a97e-456c684eec60"
	Oct 14 20:34:04 embed-certs-158674 kubelet[1212]: E1014 20:34:04.755207    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760474044753611838  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:34:04 embed-certs-158674 kubelet[1212]: E1014 20:34:04.755234    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760474044753611838  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:34:11 embed-certs-158674 kubelet[1212]: E1014 20:34:11.409190    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rbchd" podUID="22af9765-88b5-40e9-886d-a9ed5c464bb5"
	Oct 14 20:34:14 embed-certs-158674 kubelet[1212]: E1014 20:34:14.756669    1212 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760474054756326008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:34:14 embed-certs-158674 kubelet[1212]: E1014 20:34:14.756704    1212 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760474054756326008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 14 20:34:15 embed-certs-158674 kubelet[1212]: E1014 20:34:15.409163    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lhkkm" podUID="11c1df79-7653-4919-a97e-456c684eec60"
	Oct 14 20:34:17 embed-certs-158674 kubelet[1212]: I1014 20:34:17.407320    1212 scope.go:117] "RemoveContainer" containerID="4935f639000407412f4a4395f8cd245e376c2ada7dc374b85173c3b4dd7b339b"
	Oct 14 20:34:17 embed-certs-158674 kubelet[1212]: E1014 20:34:17.407522    1212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mz5cm_kubernetes-dashboard(30fef46b-43ef-4af7-b50b-ba3f07a7afde)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mz5cm" podUID="30fef46b-43ef-4af7-b50b-ba3f07a7afde"
	
	
	==> storage-provisioner [003294c62d4e668811ca87e3130bdcfb3915e9bebe89b0c4f4ca79a82b670740] <==
	I1014 20:16:01.122238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 20:16:31.141250       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [329ec9ece1ee87ff29d15aa2b946df57d3f4b3baa58d36ea71db147fb16de05e] <==
	W1014 20:33:55.690902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:33:57.694153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:33:57.699301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:33:59.703452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:33:59.718342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:01.722778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:01.727789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:03.731215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:03.736438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:05.740200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:05.745502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:07.750168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:07.761080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:09.764791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:09.772568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:11.775707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:11.780827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:13.784079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:13.789558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:15.793602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:15.799273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:17.802103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:17.810413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:19.813618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1014 20:34:19.819456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-158674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm: exit status 1 (64.541038ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-rbchd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lhkkm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-158674 describe pod metrics-server-746fcd58dc-rbchd kubernetes-dashboard-855c9754f9-lhkkm: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.70s)
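
The storage-provisioner log above repeats one warning: the core v1 Endpoints API is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. For reference, the replacement lookup with client-go looks roughly like the following. This is a minimal sketch, not code from minikube or the storage provisioner; the in-cluster config and the "kube-system" namespace are assumptions for illustration.

	// Sketch: list EndpointSlices (discovery.k8s.io/v1) instead of the
	// deprecated core v1 Endpoints. Assumes the process runs in-cluster.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}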

                                                
                                    

Test pass (278/324)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 21.9
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.32
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 87.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 155.39
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.54
35 TestAddons/parallel/Registry 18.13
36 TestAddons/parallel/RegistryCreds 0.82
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 6.18
41 TestAddons/parallel/CSI 59.19
42 TestAddons/parallel/Headlamp 20.77
43 TestAddons/parallel/CloudSpanner 6.61
44 TestAddons/parallel/LocalPath 61.03
45 TestAddons/parallel/NvidiaDevicePlugin 6.8
46 TestAddons/parallel/Yakd 12.53
48 TestAddons/StoppedEnableDisable 86.01
49 TestCertOptions 85.57
50 TestCertExpiration 290.79
52 TestForceSystemdFlag 66.77
53 TestForceSystemdEnv 65.96
55 TestKVMDriverInstallOrUpdate 1.43
59 TestErrorSpam/setup 39.34
60 TestErrorSpam/start 0.37
61 TestErrorSpam/status 0.79
62 TestErrorSpam/pause 1.69
63 TestErrorSpam/unpause 1.82
64 TestErrorSpam/stop 5.09
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 53.1
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.46
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
76 TestFunctional/serial/CacheCmd/cache/add_local 2.15
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.12
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 37.69
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.5
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 3.9
90 TestFunctional/parallel/ConfigCmd 0.36
91 TestFunctional/parallel/DashboardCmd 17.73
92 TestFunctional/parallel/DryRun 0.3
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.91
98 TestFunctional/parallel/ServiceCmdConnect 22.55
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 38.29
102 TestFunctional/parallel/SSHCmd 0.39
103 TestFunctional/parallel/CpCmd 1.51
104 TestFunctional/parallel/MySQL 28.17
105 TestFunctional/parallel/FileSync 0.22
106 TestFunctional/parallel/CertSync 1.36
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
114 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.76
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
121 TestFunctional/parallel/ImageCommands/ImageBuild 5.62
122 TestFunctional/parallel/ImageCommands/Setup 1.76
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
126 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
128 TestFunctional/parallel/ProfileCmd/profile_list 0.39
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
130 TestFunctional/parallel/MountCmd/any-port 8.56
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.67
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.56
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
138 TestFunctional/parallel/ServiceCmd/List 0.9
139 TestFunctional/parallel/MountCmd/specific-port 1.96
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
142 TestFunctional/parallel/ServiceCmd/Format 0.31
143 TestFunctional/parallel/ServiceCmd/URL 0.37
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 199.49
162 TestMultiControlPlane/serial/DeployApp 7
163 TestMultiControlPlane/serial/PingHostFromPods 1.24
164 TestMultiControlPlane/serial/AddWorkerNode 47.18
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
167 TestMultiControlPlane/serial/CopyFile 13.34
168 TestMultiControlPlane/serial/StopSecondaryNode 82.73
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.68
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.78
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 386.42
173 TestMultiControlPlane/serial/DeleteSecondaryNode 17.64
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.65
175 TestMultiControlPlane/serial/StopCluster 262.49
176 TestMultiControlPlane/serial/RestartCluster 102.72
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 77.47
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
183 TestJSONOutput/start/Command 80.02
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.75
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.7
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.17
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 83.95
215 TestMountStart/serial/StartWithMountFirst 22.03
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 21.47
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.26
222 TestMountStart/serial/RestartStopped 19.64
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 99.08
227 TestMultiNode/serial/DeployApp2Nodes 5.91
228 TestMultiNode/serial/PingHostFrom2Pods 0.81
229 TestMultiNode/serial/AddNode 43.37
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.6
232 TestMultiNode/serial/CopyFile 7.42
233 TestMultiNode/serial/StopNode 2.43
234 TestMultiNode/serial/StartAfterStop 37.05
235 TestMultiNode/serial/RestartKeepsNodes 273.32
236 TestMultiNode/serial/DeleteNode 2.95
237 TestMultiNode/serial/StopMultiNode 176.72
238 TestMultiNode/serial/RestartMultiNode 86.38
239 TestMultiNode/serial/ValidateNameConflict 39.7
246 TestScheduledStopUnix 108.4
250 TestRunningBinaryUpgrade 149.03
252 TestKubernetesUpgrade 130.84
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
257 TestPause/serial/Start 106.31
258 TestNoKubernetes/serial/StartWithK8s 82.84
259 TestNoKubernetes/serial/StartWithStopK8s 8.06
260 TestNoKubernetes/serial/Start 37.78
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
263 TestNoKubernetes/serial/ProfileList 5.73
264 TestNoKubernetes/serial/Stop 1.31
265 TestNoKubernetes/serial/StartNoArgs 40.73
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
274 TestStoppedBinaryUpgrade/Setup 2.62
275 TestStoppedBinaryUpgrade/Upgrade 130.58
283 TestNetworkPlugins/group/false 3.35
288 TestStartStop/group/old-k8s-version/serial/FirstStart 71.13
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 89.42
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
293 TestStartStop/group/no-preload/serial/FirstStart 126.24
294 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
296 TestStartStop/group/old-k8s-version/serial/Stop 86.43
297 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
298 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
299 TestStartStop/group/default-k8s-diff-port/serial/Stop 81.17
300 TestStartStop/group/no-preload/serial/DeployApp 10.36
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.23
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
303 TestStartStop/group/old-k8s-version/serial/SecondStart 44.74
304 TestStartStop/group/no-preload/serial/Stop 82.56
306 TestStartStop/group/embed-certs/serial/FirstStart 55.81
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.33
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/old-k8s-version/serial/Pause 3.46
314 TestStartStop/group/newest-cni/serial/FirstStart 49.56
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
316 TestStartStop/group/no-preload/serial/SecondStart 65.65
317 TestStartStop/group/embed-certs/serial/DeployApp 10.48
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.01
320 TestStartStop/group/embed-certs/serial/Stop 82.53
321 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.36
323 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.34
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.72
326 TestNetworkPlugins/group/auto/Start 82.02
327 TestStartStop/group/newest-cni/serial/Stop 8.06
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/newest-cni/serial/SecondStart 45.55
330 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.01
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
333 TestStartStop/group/no-preload/serial/Pause 3.13
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
337 TestNetworkPlugins/group/kindnet/Start 94.59
338 TestStartStop/group/newest-cni/serial/Pause 3.94
339 TestNetworkPlugins/group/calico/Start 103.97
340 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
341 TestStartStop/group/embed-certs/serial/SecondStart 80.98
342 TestNetworkPlugins/group/auto/KubeletFlags 0.23
343 TestNetworkPlugins/group/auto/NetCatPod 12.06
344 TestNetworkPlugins/group/auto/DNS 0.18
345 TestNetworkPlugins/group/auto/Localhost 0.14
346 TestNetworkPlugins/group/auto/HairPin 0.12
347 TestNetworkPlugins/group/custom-flannel/Start 76.83
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
351 TestNetworkPlugins/group/kindnet/NetCatPod 13.25
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/DNS 0.18
354 TestNetworkPlugins/group/kindnet/Localhost 0.15
355 TestNetworkPlugins/group/kindnet/HairPin 0.15
356 TestNetworkPlugins/group/calico/KubeletFlags 0.24
357 TestNetworkPlugins/group/calico/NetCatPod 31.37
358 TestNetworkPlugins/group/enable-default-cni/Start 86.95
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
361 TestNetworkPlugins/group/custom-flannel/DNS 0.17
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
364 TestNetworkPlugins/group/calico/DNS 0.16
365 TestNetworkPlugins/group/calico/Localhost 0.12
366 TestNetworkPlugins/group/calico/HairPin 0.13
367 TestNetworkPlugins/group/flannel/Start 78.89
368 TestNetworkPlugins/group/bridge/Start 104.83
369 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
370 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.46
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
374 TestNetworkPlugins/group/flannel/ControllerPod 6.01
375 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
376 TestNetworkPlugins/group/flannel/NetCatPod 10.25
377 TestNetworkPlugins/group/flannel/DNS 0.15
378 TestNetworkPlugins/group/flannel/Localhost 0.13
379 TestNetworkPlugins/group/flannel/HairPin 0.13
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
381 TestNetworkPlugins/group/bridge/NetCatPod 10.26
382 TestNetworkPlugins/group/bridge/DNS 0.15
383 TestNetworkPlugins/group/bridge/Localhost 0.12
384 TestNetworkPlugins/group/bridge/HairPin 0.13
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/embed-certs/serial/Pause 2.69
TestDownloadOnly/v1.28.0/json-events (21.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-775287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-775287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.897292504s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.90s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1014 19:10:37.471594  368634 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1014 19:10:37.471755  368634 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
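
The preload-exists step passes as soon as the tarball cached by the previous test is present on disk. A minimal sketch of that kind of check follows; the cache path below is an assumption modeled on the log line above, not read from minikube's config:

	// Sketch: check whether a cached preload tarball exists on disk.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Hypothetical cache location; the real path comes from MINIKUBE_HOME.
		p := filepath.Join(os.Getenv("HOME"), ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(p); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("preload found:", p)
	}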

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-775287
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-775287: exit status 85 (65.333628ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-775287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-775287 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:10:15
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:10:15.617359  368646 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:10:15.617587  368646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:15.617596  368646 out.go:374] Setting ErrFile to fd 2...
	I1014 19:10:15.617600  368646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:15.617815  368646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	W1014 19:10:15.617938  368646 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-364627/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-364627/.minikube/config/config.json: no such file or directory
	I1014 19:10:15.618445  368646 out.go:368] Setting JSON to true
	I1014 19:10:15.619451  368646 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3159,"bootTime":1760465857,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:10:15.619545  368646 start.go:141] virtualization: kvm guest
	I1014 19:10:15.621715  368646 out.go:99] [download-only-775287] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:10:15.621839  368646 notify.go:220] Checking for updates...
	W1014 19:10:15.621863  368646 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 19:10:15.623004  368646 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:10:15.624229  368646 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:10:15.625444  368646 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:10:15.626760  368646 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:10:15.627786  368646 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 19:10:15.629703  368646 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:10:15.629938  368646 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:10:15.661631  368646 out.go:99] Using the kvm2 driver based on user configuration
	I1014 19:10:15.661664  368646 start.go:305] selected driver: kvm2
	I1014 19:10:15.661669  368646 start.go:925] validating driver "kvm2" against <nil>
	I1014 19:10:15.662013  368646 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:15.662128  368646 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:15.677854  368646 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:15.677885  368646 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:15.691390  368646 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:15.691443  368646 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:10:15.691984  368646 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1014 19:10:15.692126  368646 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:10:15.692150  368646 cni.go:84] Creating CNI manager for ""
	I1014 19:10:15.692195  368646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 19:10:15.692206  368646 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 19:10:15.692253  368646 start.go:349] cluster config:
	{Name:download-only-775287 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-775287 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:10:15.692532  368646 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:15.694415  368646 out.go:99] Downloading VM boot image ...
	I1014 19:10:15.694478  368646 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/iso/amd64/minikube-v1.37.0-1758198818-20370-amd64.iso
	I1014 19:10:25.347608  368646 out.go:99] Starting "download-only-775287" primary control-plane node in "download-only-775287" cluster
	I1014 19:10:25.347632  368646 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:10:25.437592  368646 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1014 19:10:25.437632  368646 cache.go:58] Caching tarball of preloaded images
	I1014 19:10:25.437819  368646 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1014 19:10:25.439566  368646 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1014 19:10:25.439594  368646 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1014 19:10:25.539866  368646 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1014 19:10:25.539997  368646 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-775287 host does not exist
	  To start a cluster, run: "minikube start -p download-only-775287"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
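
The Last Start log above shows the preload tarball being fetched with a ?checksum=md5:... query, using the checksum obtained from the GCS API ("72bc7f8573f574c02d8c9a9b3496176b"). Written out, that verification amounts to the following; a minimal sketch, not minikube's download.go, with the local file name assumed:

	// Sketch: verify a downloaded file against an expected MD5 checksum.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "72bc7f8573f574c02d8c9a9b3496176b" // checksum from the log above
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Printf("checksum mismatch: got %s want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}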

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-775287
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-480467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-480467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (11.32449541s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.32s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1014 19:10:49.149508  368634 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1014 19:10:49.149567  368634 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-480467
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-480467: exit status 85 (63.689113ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-775287 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-775287 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │ 14 Oct 25 19:10 UTC │
	│ delete  │ -p download-only-775287                                                                                                                                                                             │ download-only-775287 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │ 14 Oct 25 19:10 UTC │
	│ start   │ -o=json --download-only -p download-only-480467 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-480467 │ jenkins │ v1.37.0 │ 14 Oct 25 19:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/14 19:10:37
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 19:10:37.868213  368884 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:10:37.868659  368884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:37.868673  368884 out.go:374] Setting ErrFile to fd 2...
	I1014 19:10:37.868678  368884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:10:37.868868  368884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:10:37.869416  368884 out.go:368] Setting JSON to true
	I1014 19:10:37.870372  368884 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3181,"bootTime":1760465857,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:10:37.870473  368884 start.go:141] virtualization: kvm guest
	I1014 19:10:37.872625  368884 out.go:99] [download-only-480467] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:10:37.872810  368884 notify.go:220] Checking for updates...
	I1014 19:10:37.874189  368884 out.go:171] MINIKUBE_LOCATION=21409
	I1014 19:10:37.875765  368884 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:10:37.877138  368884 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:10:37.878435  368884 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:10:37.879859  368884 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 19:10:37.882454  368884 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 19:10:37.882743  368884 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:10:37.914359  368884 out.go:99] Using the kvm2 driver based on user configuration
	I1014 19:10:37.914407  368884 start.go:305] selected driver: kvm2
	I1014 19:10:37.914416  368884 start.go:925] validating driver "kvm2" against <nil>
	I1014 19:10:37.914768  368884 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:37.914874  368884 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:37.929219  368884 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:37.929256  368884 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21409-364627/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 19:10:37.943300  368884 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1014 19:10:37.943376  368884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1014 19:10:37.943948  368884 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1014 19:10:37.944111  368884 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 19:10:37.944140  368884 cni.go:84] Creating CNI manager for ""
	I1014 19:10:37.944207  368884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 19:10:37.944219  368884 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 19:10:37.944285  368884 start.go:349] cluster config:
	{Name:download-only-480467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-480467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:10:37.944449  368884 iso.go:125] acquiring lock: {Name:mk340b9c7aeea9c9c1eaad0b6560c7e40df5ab00 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 19:10:37.946188  368884 out.go:99] Starting "download-only-480467" primary control-plane node in "download-only-480467" cluster
	I1014 19:10:37.946213  368884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:10:38.041134  368884 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1014 19:10:38.041170  368884 cache.go:58] Caching tarball of preloaded images
	I1014 19:10:38.041374  368884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1014 19:10:38.043341  368884 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1014 19:10:38.043371  368884 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1014 19:10:38.142883  368884 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1014 19:10:38.142954  368884 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21409-364627/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-480467 host does not exist
	  To start a cluster, run: "minikube start -p download-only-480467"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-480467
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1014 19:10:49.780064  368634 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-462626 --alsologtostderr --binary-mirror http://127.0.0.1:42741 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-462626" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-462626
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (87.58s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-270302 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-270302 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.519357466s)
helpers_test.go:175: Cleaning up "offline-crio-270302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-270302
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-270302: (1.061190193s)
--- PASS: TestOffline (87.58s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-082251
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-082251: exit status 85 (58.316706ms)

                                                
                                                
-- stdout --
	* Profile "addons-082251" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-082251"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-082251
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-082251: exit status 85 (59.012974ms)

                                                
                                                
-- stdout --
	* Profile "addons-082251" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-082251"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (155.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-082251 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-082251 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m35.386810894s)
--- PASS: TestAddons/Setup (155.39s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-082251 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-082251 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-082251 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-082251 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [82e239e4-46e4-4a5b-913a-74be19e087ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [82e239e4-46e4-4a5b-913a-74be19e087ce] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004783054s
addons_test.go:694: (dbg) Run:  kubectl --context addons-082251 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-082251 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-082251 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.54s)
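
The gcp-auth checks above can be repeated by hand against the same profile; a sketch (the `kubectl wait` step is an addition standing in for the test's readiness polling):

    kubectl --context addons-082251 create -f testdata/busybox.yaml
    kubectl --context addons-082251 wait --for=condition=ready pod/busybox --timeout=120s
    # gcp-auth injects both variables into newly created pods:
    kubectl --context addons-082251 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-082251 exec busybox -- printenv GOOGLE_CLOUD_PROJECT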

TestAddons/parallel/Registry (18.13s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 8.519597ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-wwf86" [ad5c8d48-73fd-4a58-bb4a-7aa0b51956fe] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.007390233s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xqw5q" [b201e8db-0b5b-4101-8b24-9c1cf511c81b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003897207s
addons_test.go:392: (dbg) Run:  kubectl --context addons-082251 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-082251 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-082251 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.950961972s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 ip
2025/10/14 19:14:02 [DEBUG] GET http://192.168.39.214:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.13s)
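
To rerun the reachability checks above by hand: the in-cluster probe is the same wget the test uses; the host-side curl is an equivalent substitute for the test's HTTP GET against the node IP on port 5000:

    kubectl --context addons-082251 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -sI "http://$(out/minikube-linux-amd64 -p addons-082251 ip):5000"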

TestAddons/parallel/RegistryCreds (0.82s)
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.57843ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-082251
addons_test.go:332: (dbg) Run:  kubectl --context addons-082251 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.82s)

TestAddons/parallel/InspektorGadget (5.31s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hqcbg" [30556161-b0ef-471f-964f-a6eca37b15a1] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006355847s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.31s)

TestAddons/parallel/MetricsServer (6.18s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.52487ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8pqv5" [ca2ee05c-08b9-4d0e-b306-0fc54ab16eb0] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009062857s
addons_test.go:463: (dbg) Run:  kubectl --context addons-082251 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable metrics-server --alsologtostderr -v=1: (1.065902035s)
--- PASS: TestAddons/parallel/MetricsServer (6.18s)

TestAddons/parallel/CSI (59.19s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1014 19:14:06.058479  368634 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 19:14:06.064200  368634 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 19:14:06.064234  368634 kapi.go:107] duration metric: took 5.774207ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.789963ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7e8b766a-c2f3-4cf2-96bb-1415b44dcc59] Pending
helpers_test.go:352: "task-pv-pod" [7e8b766a-c2f3-4cf2-96bb-1415b44dcc59] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7e8b766a-c2f3-4cf2-96bb-1415b44dcc59] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00550674s
addons_test.go:572: (dbg) Run:  kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-082251 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-082251 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-082251 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-082251 delete pod task-pv-pod: (1.210556756s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-082251 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d0410053-db41-428d-ba1d-2999756ef090] Pending
helpers_test.go:352: "task-pv-pod-restore" [d0410053-db41-428d-ba1d-2999756ef090] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d0410053-db41-428d-ba1d-2999756ef090] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00416333s
addons_test.go:614: (dbg) Run:  kubectl --context addons-082251 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-082251 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-082251 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable volumesnapshots --alsologtostderr -v=1: (1.048123486s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.930615119s)
--- PASS: TestAddons/parallel/CSI (59.19s)
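
Condensed, the snapshot/restore flow exercised above is (readiness waits between steps omitted; the manifests are the test's own testdata files):

    kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pvc.yaml          # claim: hpvc
    kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pv-pod.yaml       # pod: task-pv-pod
    kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/snapshot.yaml     # snapshot: new-snapshot-demo
    kubectl --context addons-082251 delete pod task-pv-pod
    kubectl --context addons-082251 delete pvc hpvc
    kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # hpvc-restore, from the snapshot
    kubectl --context addons-082251 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod: task-pv-pod-restore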

TestAddons/parallel/Headlamp (20.77s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-082251 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vkq65" [ad961239-cbd9-4836-8355-f5722d847f06] Pending
helpers_test.go:352: "headlamp-6945c6f4d-vkq65" [ad961239-cbd9-4836-8355-f5722d847f06] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vkq65" [ad961239-cbd9-4836-8355-f5722d847f06] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.011093894s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable headlamp --alsologtostderr -v=1: (5.880260131s)
--- PASS: TestAddons/parallel/Headlamp (20.77s)

TestAddons/parallel/CloudSpanner (6.61s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-488rs" [3bfc8984-9e8a-4e8a-bab0-05cf7340658c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003154969s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (61.03s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-082251 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-082251 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [070d011d-08d7-483f-9277-43d835505377] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [070d011d-08d7-483f-9277-43d835505377] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [070d011d-08d7-483f-9277-43d835505377] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004640249s
addons_test.go:967: (dbg) Run:  kubectl --context addons-082251 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 ssh "cat /opt/local-path-provisioner/pvc-499359fe-7f53-4caf-9df4-794032febc47_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-082251 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-082251 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.115011621s)
--- PASS: TestAddons/parallel/LocalPath (61.03s)
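
The provisioned data can be inspected directly on the node, as the test does; the pvc-<uid> directory name is taken from `kubectl get pvc test-pvc -o=json` (the UID below is the one from this run):

    out/minikube-linux-amd64 -p addons-082251 ssh \
      "cat /opt/local-path-provisioner/pvc-499359fe-7f53-4caf-9df4-794032febc47_default_test-pvc/file1"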

TestAddons/parallel/NvidiaDevicePlugin (6.8s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-r6zsz" [dab834be-d432-4ec8-bbba-8cdbd68df25c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.012521519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.80s)

TestAddons/parallel/Yakd (12.53s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-kzb88" [7854246d-dbe4-4bb5-ae6a-5822d2b98595] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005940178s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-082251 addons disable yakd --alsologtostderr -v=1: (6.527900676s)
--- PASS: TestAddons/parallel/Yakd (12.53s)

TestAddons/StoppedEnableDisable (86.01s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-082251
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-082251: (1m25.709302318s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-082251
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-082251
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-082251
--- PASS: TestAddons/StoppedEnableDisable (86.01s)
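
The point of this test is that addon toggling works while the cluster is down; the same sequence by hand:

    out/minikube-linux-amd64 stop -p addons-082251
    out/minikube-linux-amd64 addons enable dashboard -p addons-082251
    out/minikube-linux-amd64 addons disable dashboard -p addons-082251
    out/minikube-linux-amd64 addons disable gvisor -p addons-082251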

TestCertOptions (85.57s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-884341 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 20:08:09.664934  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-884341 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m23.875769355s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-884341 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-884341 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-884341 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-884341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-884341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-884341: (1.182293965s)
--- PASS: TestCertOptions (85.57s)
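
To verify that the extra SANs and the non-default apiserver port landed in the serving certificate (the grep filter is an addition; the test inspects the full openssl output):

    out/minikube-linux-amd64 -p cert-options-884341 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"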

TestCertExpiration (290.79s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m7.422004894s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.468933839s)
helpers_test.go:175: Cleaning up "cert-expiration-827241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-827241
--- PASS: TestCertExpiration (290.79s)
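
The flow above: issue certificates with a 3-minute lifetime, let them lapse, then a second start with --cert-expiration=8760h (one year) regenerates them. Same flags as this run:

    out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio --auto-update-drivers=false
    # ...after the 3m lifetime has passed:
    out/minikube-linux-amd64 start -p cert-expiration-827241 --memory=3072 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio --auto-update-drivers=false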

TestForceSystemdFlag (66.77s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-140067 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-140067 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m5.53043563s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-140067 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-140067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-140067
--- PASS: TestForceSystemdFlag (66.77s)
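
The post-start check reads CRI-O's generated drop-in config; with --force-systemd the expectation is the systemd cgroup manager (the expected key below is CRI-O's documented cgroup_manager setting, not quoted from this log):

    out/minikube-linux-amd64 -p force-systemd-flag-140067 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf"
    # expected to contain: cgroup_manager = "systemd"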

TestForceSystemdEnv (65.96s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-702842 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-702842 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m5.088955272s)
helpers_test.go:175: Cleaning up "force-systemd-env-702842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-702842
--- PASS: TestForceSystemdEnv (65.96s)

TestKVMDriverInstallOrUpdate (1.43s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I1014 20:07:19.275429  368634 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 20:07:19.275610  368634 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3562903331/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1014 20:07:19.314488  368634 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3562903331/001/docker-machine-driver-kvm2 version is 1.1.1
W1014 20:07:19.314540  368634 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1014 20:07:19.314741  368634 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1014 20:07:19.314819  368634 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3562903331/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.43s)

TestErrorSpam/setup (39.34s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-440224 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-440224 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:18:26.591503  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.597894  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.609360  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.630771  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.672272  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.753860  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:26.915488  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:27.237237  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:27.878896  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:29.160494  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:31.722207  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:18:36.844647  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-440224 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-440224 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.340183919s)
--- PASS: TestErrorSpam/setup (39.34s)

TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.79s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 status
E1014 19:18:47.086476  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.69s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.82s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

TestErrorSpam/stop (5.09s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop: (1.916048898s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop: (1.854348506s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-440224 --log_dir /tmp/nospam-440224 stop: (1.317011779s)
--- PASS: TestErrorSpam/stop (5.09s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-364627/.minikube/files/etc/test/nested/copy/368634/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (53.1s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:19:07.568300  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:19:48.529889  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-416610 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (53.098261383s)
--- PASS: TestFunctional/serial/StartWithProxy (53.10s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.46s)
=== RUN   TestFunctional/serial/SoftStart
I1014 19:19:49.894715  368634 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-416610 --alsologtostderr -v=8: (39.455228486s)
functional_test.go:678: soft start took 39.455983442s for "functional-416610" cluster.
I1014 19:20:29.350384  368634 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (39.46s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-416610 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:3.1: (1.075391809s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:3.3: (1.242427676s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 cache add registry.k8s.io/pause:latest: (1.147927143s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/add_local (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-416610 /tmp/TestFunctionalserialCacheCmdcacheadd_local2397398232/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache add minikube-local-cache-test:functional-416610
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 cache add minikube-local-cache-test:functional-416610: (1.815154529s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache delete minikube-local-cache-test:functional-416610
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-416610
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.984242ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 cache reload: (1.022674149s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
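
The reload round-trip above, by hand: remove the cached image from the node, confirm it is gone (inspecti exits non-zero), then `cache reload` pushes everything in minikube's local cache back onto the node:

    out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
    out/minikube-linux-amd64 -p functional-416610 cache reload
    out/minikube-linux-amd64 -p functional-416610 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again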

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 kubectl -- --context functional-416610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-416610 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (37.69s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1014 19:21:10.454539  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-416610 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.691682733s)
functional_test.go:776: restart took 37.691849926s for "functional-416610" cluster.
I1014 19:21:15.212615  368634 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.69s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-416610 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.5s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 logs: (1.499026219s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

TestFunctional/serial/LogsFileCmd (1.48s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 logs --file /tmp/TestFunctionalserialLogsFileCmd259223247/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 logs --file /tmp/TestFunctionalserialLogsFileCmd259223247/001/logs.txt: (1.481394193s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (3.9s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-416610 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-416610
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-416610: exit status 115 (293.614746ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.139:32539 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-416610 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.90s)

TestFunctional/parallel/ConfigCmd (0.36s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 config get cpus: exit status 14 (51.749574ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 config get cpus: exit status 14 (58.775679ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)

TestFunctional/parallel/DashboardCmd (17.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-416610 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-416610 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 376506: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.73s)
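
Note: the "unable to kill pid 376506: os: process already finished" line is a benign race: the dashboard child exited before the cleanup tried to kill it. A small Go sketch of the same pattern (illustrative only, using a short-lived `true` process):

package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("true")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	_ = cmd.Wait() // by now the child has already finished

	// Killing a reaped process reports os.ErrProcessDone, which a
	// supervisor can safely ignore, as the helper does above.
	if err := cmd.Process.Kill(); errors.Is(err, os.ErrProcessDone) {
		fmt.Println("process already finished; nothing to kill")
	} else if err != nil {
		fmt.Println("kill failed:", err)
	}
}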

TestFunctional/parallel/DryRun (0.3s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-416610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (152.30391ms)
-- stdout --
	* [functional-416610] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1014 19:21:25.009748  376323 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:21:25.010050  376323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:21:25.010063  376323 out.go:374] Setting ErrFile to fd 2...
	I1014 19:21:25.010070  376323 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:21:25.010342  376323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:21:25.010867  376323 out.go:368] Setting JSON to false
	I1014 19:21:25.012156  376323 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3828,"bootTime":1760465857,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:21:25.012257  376323 start.go:141] virtualization: kvm guest
	I1014 19:21:25.013728  376323 out.go:179] * [functional-416610] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 19:21:25.015261  376323 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:21:25.015264  376323 notify.go:220] Checking for updates...
	I1014 19:21:25.017573  376323 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:21:25.018822  376323 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:21:25.019937  376323 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:21:25.021225  376323 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:21:25.022515  376323 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:21:25.024347  376323 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:21:25.024980  376323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:21:25.025071  376323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:21:25.041153  376323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I1014 19:21:25.041799  376323 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:21:25.042477  376323 main.go:141] libmachine: Using API Version  1
	I1014 19:21:25.042508  376323 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:21:25.043009  376323 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:21:25.043236  376323 main.go:141] libmachine: (functional-416610) Calling .DriverName
	I1014 19:21:25.043561  376323 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:21:25.044035  376323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:21:25.044092  376323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:21:25.059778  376323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1014 19:21:25.060217  376323 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:21:25.060727  376323 main.go:141] libmachine: Using API Version  1
	I1014 19:21:25.060752  376323 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:21:25.061130  376323 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:21:25.061355  376323 main.go:141] libmachine: (functional-416610) Calling .DriverName
	I1014 19:21:25.096214  376323 out.go:179] * Using the kvm2 driver based on existing profile
	I1014 19:21:25.097292  376323 start.go:305] selected driver: kvm2
	I1014 19:21:25.097324  376323 start.go:925] validating driver "kvm2" against &{Name:functional-416610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-416610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:21:25.097462  376323 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:21:25.099643  376323 out.go:203] 
	W1014 19:21:25.100931  376323 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 19:21:25.102112  376323 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.30s)
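
Note: the dry-run failure above (and the InternationalLanguage one below) is the expected RSRC_INSUFFICIENT_REQ_MEMORY validation: 250MiB is below the usable floor. A hedged Go sketch of that kind of check; the 1800MB floor and the message wording are quoted from this log, not taken from minikube's source.

package main

import "fmt"

const minUsableMemoryMB = 1800 // floor reported in this run's error message

func validateMemoryMB(requested int) error {
	if requested < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemoryMB(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}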

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-416610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-416610 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (150.874345ms)
-- stdout --
	* [functional-416610] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1014 19:21:24.857278  376276 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:21:24.857429  376276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:21:24.857440  376276 out.go:374] Setting ErrFile to fd 2...
	I1014 19:21:24.857447  376276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:21:24.857806  376276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:21:24.858353  376276 out.go:368] Setting JSON to false
	I1014 19:21:24.859544  376276 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3828,"bootTime":1760465857,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 19:21:24.859647  376276 start.go:141] virtualization: kvm guest
	I1014 19:21:24.862130  376276 out.go:179] * [functional-416610] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1014 19:21:24.863650  376276 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 19:21:24.863630  376276 notify.go:220] Checking for updates...
	I1014 19:21:24.865089  376276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 19:21:24.866758  376276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 19:21:24.868168  376276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 19:21:24.869349  376276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 19:21:24.870557  376276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 19:21:24.872162  376276 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:21:24.872696  376276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:21:24.872786  376276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:21:24.888028  376276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34467
	I1014 19:21:24.888603  376276 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:21:24.889184  376276 main.go:141] libmachine: Using API Version  1
	I1014 19:21:24.889210  376276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:21:24.889708  376276 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:21:24.889937  376276 main.go:141] libmachine: (functional-416610) Calling .DriverName
	I1014 19:21:24.890217  376276 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 19:21:24.890719  376276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:21:24.890776  376276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:21:24.908519  376276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34829
	I1014 19:21:24.909126  376276 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:21:24.909719  376276 main.go:141] libmachine: Using API Version  1
	I1014 19:21:24.909759  376276 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:21:24.910124  376276 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:21:24.910342  376276 main.go:141] libmachine: (functional-416610) Calling .DriverName
	I1014 19:21:24.944225  376276 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1014 19:21:24.945579  376276 start.go:305] selected driver: kvm2
	I1014 19:21:24.945597  376276 start.go:925] validating driver "kvm2" against &{Name:functional-416610 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-416610 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.139 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 19:21:24.945702  376276 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 19:21:24.947612  376276 out.go:203] 
	W1014 19:21:24.949118  376276 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 19:21:24.950207  376276 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (22.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-416610 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-416610 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-685gz" [3e23840e-4130-42ab-9a14-869becabc807] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-685gz" [3e23840e-4130-42ab-9a14-869becabc807] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 22.003794664s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.139:32755
functional_test.go:1680: http://192.168.39.139:32755: success! body:
Request served by hello-node-connect-7d85dfc575-685gz

HTTP/1.1 GET /

Host: 192.168.39.139:32755
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (22.55s)
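
Note: the connectivity check is a plain HTTP GET against the NodePort URL returned by `service hello-node-connect --url`. A minimal Go sketch of the same probe; the endpoint is hard-coded from this run and will differ elsewhere.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.39.139:32755") // URL printed by the test above
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Print(string(body)) // echo-server replies with the request it received
}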

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (38.29s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [10712500-2bfb-4f07-9c64-215413dbba49] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004237177s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-416610 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-416610 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-416610 get pvc myclaim -o=json
I1014 19:21:41.332831  368634 retry.go:31] will retry after 2.982199882s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:0c6285e3-44e0-4b9e-8efd-29fa9708c429 ResourceVersion:830 Generation:0 CreationTimestamp:2025-10-14 19:21:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001f400f0 VolumeMode:0xc001f40100 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
2025/10/14 19:21:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-416610 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-416610 apply -f testdata/storage-provisioner/pod.yaml
I1014 19:21:44.771769  368634 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9629d0f5-7319-4e07-9c7f-7bbeaf873dab] Pending
helpers_test.go:352: "sp-pod" [9629d0f5-7319-4e07-9c7f-7bbeaf873dab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9629d0f5-7319-4e07-9c7f-7bbeaf873dab] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004872994s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-416610 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-416610 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-416610 delete -f testdata/storage-provisioner/pod.yaml: (1.085130804s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-416610 apply -f testdata/storage-provisioner/pod.yaml
I1014 19:22:06.124134  368634 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d38ed114-70c9-4e98-83a9-d1965e254189] Pending
helpers_test.go:352: "sp-pod" [d38ed114-70c9-4e98-83a9-d1965e254189] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d38ed114-70c9-4e98-83a9-d1965e254189] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004779404s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-416610 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.29s)
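
Note: the retry at 19:21:41 is the usual poll-until-Bound loop; the claim sits in Pending until the hostpath provisioner binds it. A hedged Go sketch of an equivalent wait using kubectl's jsonpath output (context and claim name copied from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(kubeContext, claim string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "pvc", claim, "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		phase, err := pvcPhase("functional-416610", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		fmt.Printf("pvc phase %q, retrying\n", phase)
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for Bound")
}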

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (1.51s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh -n functional-416610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cp functional-416610:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3130391073/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh -n functional-416610 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh -n functional-416610 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

TestFunctional/parallel/MySQL (28.17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-416610 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-s9bzd" [8004f152-16c7-4ef1-9eac-a9b38d802caa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-s9bzd" [8004f152-16c7-4ef1-9eac-a9b38d802caa] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.261487763s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-416610 exec mysql-5bb876957f-s9bzd -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-416610 exec mysql-5bb876957f-s9bzd -- mysql -ppassword -e "show databases;": exit status 1 (150.788346ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1014 19:21:58.011124  368634 retry.go:31] will retry after 1.39402304s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-416610 exec mysql-5bb876957f-s9bzd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.17s)
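
Note: ERROR 2002 only means mysqld inside the pod had not yet opened its socket; the harness retried after a computed interval and the second probe succeeded. A generic hedged sketch of that retry shape (pod name copied from this run; the harness computes its own backoff):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry re-runs f with doubling delays until it succeeds or attempts run out.
func retry(attempts int, delay time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 2*time.Second, func() error {
		return exec.Command("kubectl", "--context", "functional-416610",
			"exec", "mysql-5bb876957f-s9bzd", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	})
	fmt.Println("final result:", err)
}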

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/368634/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /etc/test/nested/copy/368634/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.36s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/368634.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /etc/ssl/certs/368634.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/368634.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /usr/share/ca-certificates/368634.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3686342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /etc/ssl/certs/3686342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3686342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /usr/share/ca-certificates/3686342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.36s)
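
Note: the `51391683.0` and `3ec20f2e.0` names look like OpenSSL subject-hash links (the form produced by `openssl x509 -hash`), so the test reads the same PEM under both a human-readable path and its hash alias. A hedged Go sketch that derives the hash for a given cert by shelling out to openssl; the path is this run's, and `openssl` on PATH is assumed.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pemPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	hash, err := subjectHash("/etc/ssl/certs/368634.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	// The same cert should then be readable as /etc/ssl/certs/<hash>.0.
	fmt.Printf("expected hash link: /etc/ssl/certs/%s.0\n", hash)
}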

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-416610 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
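
Note: the label listing leans on kubectl's go-template output. The same template can be exercised standalone with Go's text/template; the label map below is a stand-in for `(index .items 0).metadata.labels`, not data from this run.

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{ // stand-in for the node's labels
		"kubernetes.io/arch":     "amd64",
		"kubernetes.io/hostname": "functional-416610",
		"kubernetes.io/os":       "linux",
	}
	// Same template shape as the kubectl invocation above.
	t := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}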

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "sudo systemctl is-active docker": exit status 1 (233.268177ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "sudo systemctl is-active containerd": exit status 1 (221.859241ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
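
Note: `systemctl is-active` exits 0 only when the unit is active; the `inactive` output with ssh exit status 3 is exactly what the test wants for runtimes that should be off under crio. A tiny Go sketch of the same probe run locally:

package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active; any non-zero exit
// (status 3 for "inactive", as in the log) counts as not active.
func isActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", unit, isActive(unit))
	}
}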

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-416610 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-416610  │ 99742baced007 │ 1.47MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-416610  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-416610  │ 28e4a68ab0e37 │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-416610 image ls --format table --alsologtostderr:
I1014 19:21:52.746390  378066 out.go:360] Setting OutFile to fd 1 ...
I1014 19:21:52.746517  378066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:52.746526  378066 out.go:374] Setting ErrFile to fd 2...
I1014 19:21:52.746530  378066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:52.746741  378066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
I1014 19:21:52.747295  378066 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:52.747415  378066 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:52.747826  378066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:52.747886  378066 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:52.761497  378066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
I1014 19:21:52.762019  378066 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:52.762661  378066 main.go:141] libmachine: Using API Version  1
I1014 19:21:52.762696  378066 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:52.763121  378066 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:52.763386  378066 main.go:141] libmachine: (functional-416610) Calling .GetState
I1014 19:21:52.765436  378066 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:52.765486  378066 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:52.779504  378066 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
I1014 19:21:52.779969  378066 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:52.780499  378066 main.go:141] libmachine: Using API Version  1
I1014 19:21:52.780523  378066 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:52.780946  378066 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:52.781156  378066 main.go:141] libmachine: (functional-416610) Calling .DriverName
I1014 19:21:52.781409  378066 ssh_runner.go:195] Run: systemctl --version
I1014 19:21:52.781434  378066 main.go:141] libmachine: (functional-416610) Calling .GetSSHHostname
I1014 19:21:52.784358  378066 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:52.784858  378066 main.go:141] libmachine: (functional-416610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:b4:93", ip: ""} in network mk-functional-416610: {Iface:virbr1 ExpiryTime:2025-10-14 20:19:11 +0000 UTC Type:0 Mac:52:54:00:08:b4:93 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-416610 Clientid:01:52:54:00:08:b4:93}
I1014 19:21:52.784897  378066 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined IP address 192.168.39.139 and MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:52.785097  378066 main.go:141] libmachine: (functional-416610) Calling .GetSSHPort
I1014 19:21:52.785270  378066 main.go:141] libmachine: (functional-416610) Calling .GetSSHKeyPath
I1014 19:21:52.785441  378066 main.go:141] libmachine: (functional-416610) Calling .GetSSHUsername
I1014 19:21:52.785596  378066 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/functional-416610/id_rsa Username:docker}
I1014 19:21:52.871059  378066 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:21:52.909235  378066 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.909251  378066 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.909554  378066 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.909582  378066 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:52.909591  378066 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:52.909597  378066 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.909647  378066 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.909917  378066 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.909950  378066 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:52.909964  378066 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
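
Note: per the stderr above, the command shells into the VM and runs `sudo crictl images --output json`, then renders the table. A hedged Go sketch of decoding that payload; the field names follow crictl's usual output shape but are an assumption here, not a verified schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		id := img.ID
		if len(id) > 13 {
			id = id[:13] // short ID, like the table above
		}
		fmt.Println(id, img.RepoTags, img.Size)
	}
}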

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-416610 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"rep
oTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"c6dab92f0372ff3df244853f209e9132d5a81994df0451bdb22dc0a85ea19b81","repoDigests":["docker.io/library/7726cb14c4b493606a6828d8b6343924ec6a702720c741903adc243ff586fd23-tmp@sha256:bd24660b02f044989615ef26bb3f635948fca6edbb89e0bae275955892fcd050"],"repoTags":[],"size":"1466018"},{"id":"99742baced007bf1c942d6d7e51b8852f3451f4143f4ec3d1326831
714ad737a","repoDigests":["localhost/my-image@sha256:f66aaad8177affe09ea7ff567ea9d50a9a66cca94497be33302d84569af29995"],"repoTags":["localhost/my-image:functional-416610"],"size":"1468599"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb050
6e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"28e4a68ab0e3736d6f9f9f12f9f61665d0bee6b376fbff4fc04d6618191e5f54","repoDigests":["localhost/minikube-local-cache-test@sha256:39e9e5eb2be6398f8b1aa02501c3151ad6e22bf311b2a25f02fa143c23f4ff63"],"repoTags":["localhost/minikube-local-cache-test:functional-416610"],"size":"3330"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-416610"],"size":"4945246"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-416610 image ls --format json --alsologtostderr:
I1014 19:21:52.513503  378042 out.go:360] Setting OutFile to fd 1 ...
I1014 19:21:52.513777  378042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:52.513786  378042 out.go:374] Setting ErrFile to fd 2...
I1014 19:21:52.513791  378042 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:52.514043  378042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
I1014 19:21:52.514714  378042 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:52.514826  378042 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:52.515243  378042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:52.515348  378042 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:52.529286  378042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
I1014 19:21:52.529897  378042 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:52.530599  378042 main.go:141] libmachine: Using API Version  1
I1014 19:21:52.530626  378042 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:52.531057  378042 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:52.531272  378042 main.go:141] libmachine: (functional-416610) Calling .GetState
I1014 19:21:52.533429  378042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:52.533492  378042 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:52.548444  378042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
I1014 19:21:52.548890  378042 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:52.549368  378042 main.go:141] libmachine: Using API Version  1
I1014 19:21:52.549392  378042 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:52.549750  378042 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:52.549938  378042 main.go:141] libmachine: (functional-416610) Calling .DriverName
I1014 19:21:52.550145  378042 ssh_runner.go:195] Run: systemctl --version
I1014 19:21:52.550170  378042 main.go:141] libmachine: (functional-416610) Calling .GetSSHHostname
I1014 19:21:52.553137  378042 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:52.553603  378042 main.go:141] libmachine: (functional-416610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:b4:93", ip: ""} in network mk-functional-416610: {Iface:virbr1 ExpiryTime:2025-10-14 20:19:11 +0000 UTC Type:0 Mac:52:54:00:08:b4:93 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-416610 Clientid:01:52:54:00:08:b4:93}
I1014 19:21:52.553635  378042 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined IP address 192.168.39.139 and MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:52.553796  378042 main.go:141] libmachine: (functional-416610) Calling .GetSSHPort
I1014 19:21:52.553988  378042 main.go:141] libmachine: (functional-416610) Calling .GetSSHKeyPath
I1014 19:21:52.554132  378042 main.go:141] libmachine: (functional-416610) Calling .GetSSHUsername
I1014 19:21:52.554269  378042 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/functional-416610/id_rsa Username:docker}
I1014 19:21:52.636782  378042 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:21:52.691554  378042 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.691566  378042 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.691985  378042 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.692007  378042 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:52.692016  378042 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.691984  378042 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:52.692024  378042 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.692416  378042 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.692437  378042 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:52.692446  378042 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
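
The JSON that image ls --format json prints above is a single array of objects with id, repoDigests, repoTags, and size fields, so it composes with ordinary JSON tooling. A minimal sketch, assuming jq is available on the host (the test itself does no such post-processing):

# list every tagged reference known to the cluster runtime
out/minikube-linux-amd64 -p functional-416610 image ls --format json | jq -r '.[].repoTags[]?'
# total the reported image sizes; size is a JSON string, hence tonumber
out/minikube-linux-amd64 -p functional-416610 image ls --format json | jq '[.[].size | tonumber] | add'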

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-416610 image ls --format yaml --alsologtostderr:
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 28e4a68ab0e3736d6f9f9f12f9f61665d0bee6b376fbff4fc04d6618191e5f54
repoDigests:
- localhost/minikube-local-cache-test@sha256:39e9e5eb2be6398f8b1aa02501c3151ad6e22bf311b2a25f02fa143c23f4ff63
repoTags:
- localhost/minikube-local-cache-test:functional-416610
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-416610
size: "4945246"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-416610 image ls --format yaml --alsologtostderr:
I1014 19:21:46.645259  377925 out.go:360] Setting OutFile to fd 1 ...
I1014 19:21:46.645586  377925 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:46.645597  377925 out.go:374] Setting ErrFile to fd 2...
I1014 19:21:46.645604  377925 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:46.645796  377925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
I1014 19:21:46.646424  377925 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:46.646551  377925 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:46.646938  377925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:46.647004  377925 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:46.660750  377925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46269
I1014 19:21:46.661304  377925 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:46.662047  377925 main.go:141] libmachine: Using API Version  1
I1014 19:21:46.662095  377925 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:46.662565  377925 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:46.662788  377925 main.go:141] libmachine: (functional-416610) Calling .GetState
I1014 19:21:46.665253  377925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:46.665322  377925 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:46.679197  377925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
I1014 19:21:46.679755  377925 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:46.680367  377925 main.go:141] libmachine: Using API Version  1
I1014 19:21:46.680397  377925 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:46.680775  377925 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:46.680995  377925 main.go:141] libmachine: (functional-416610) Calling .DriverName
I1014 19:21:46.681205  377925 ssh_runner.go:195] Run: systemctl --version
I1014 19:21:46.681231  377925 main.go:141] libmachine: (functional-416610) Calling .GetSSHHostname
I1014 19:21:46.684516  377925 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:46.685004  377925 main.go:141] libmachine: (functional-416610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:b4:93", ip: ""} in network mk-functional-416610: {Iface:virbr1 ExpiryTime:2025-10-14 20:19:11 +0000 UTC Type:0 Mac:52:54:00:08:b4:93 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-416610 Clientid:01:52:54:00:08:b4:93}
I1014 19:21:46.685036  377925 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined IP address 192.168.39.139 and MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:46.685224  377925 main.go:141] libmachine: (functional-416610) Calling .GetSSHPort
I1014 19:21:46.685451  377925 main.go:141] libmachine: (functional-416610) Calling .GetSSHKeyPath
I1014 19:21:46.685634  377925 main.go:141] libmachine: (functional-416610) Calling .GetSSHUsername
I1014 19:21:46.685782  377925 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/functional-416610/id_rsa Username:docker}
I1014 19:21:46.782213  377925 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 19:21:46.836213  377925 main.go:141] libmachine: Making call to close driver server
I1014 19:21:46.836234  377925 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:46.836679  377925 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:46.836679  377925 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:46.836709  377925 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:46.836718  377925 main.go:141] libmachine: Making call to close driver server
I1014 19:21:46.836726  377925 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:46.836970  377925 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:46.836999  377925 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:46.837020  377925 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
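
The YAML form carries the same fields as the JSON form, and it makes the dangling images easy to spot: the two kubernetesui entries above have digests but an empty repoTags list. A small filtering sketch, assuming mikefarah's yq v4 is installed on the host (not something the test does):

# print the ids of images that have no tags
out/minikube-linux-amd64 -p functional-416610 image ls --format yaml | yq '.[] | select(.repoTags | length == 0) | .id'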

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh pgrep buildkitd: exit status 1 (213.605119ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image build -t localhost/my-image:functional-416610 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 image build -t localhost/my-image:functional-416610 testdata/build --alsologtostderr: (5.176439081s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-416610 image build -t localhost/my-image:functional-416610 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c6dab92f037
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-416610
--> 99742baced0
Successfully tagged localhost/my-image:functional-416610
99742baced007bf1c942d6d7e51b8852f3451f4143f4ec3d1326831714ad737a
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-416610 image build -t localhost/my-image:functional-416610 testdata/build --alsologtostderr:
I1014 19:21:47.102563  377978 out.go:360] Setting OutFile to fd 1 ...
I1014 19:21:47.102861  377978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:47.102872  377978 out.go:374] Setting ErrFile to fd 2...
I1014 19:21:47.102878  377978 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1014 19:21:47.103094  377978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
I1014 19:21:47.103733  377978 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:47.104444  377978 config.go:182] Loaded profile config "functional-416610": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1014 19:21:47.104846  377978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:47.104899  377978 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:47.119405  377978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
I1014 19:21:47.119874  377978 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:47.120493  377978 main.go:141] libmachine: Using API Version  1
I1014 19:21:47.120520  377978 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:47.120874  377978 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:47.121140  377978 main.go:141] libmachine: (functional-416610) Calling .GetState
I1014 19:21:47.123273  377978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 19:21:47.123377  377978 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 19:21:47.137277  377978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
I1014 19:21:47.137832  377978 main.go:141] libmachine: () Calling .GetVersion
I1014 19:21:47.138355  377978 main.go:141] libmachine: Using API Version  1
I1014 19:21:47.138378  377978 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 19:21:47.138740  377978 main.go:141] libmachine: () Calling .GetMachineName
I1014 19:21:47.138936  377978 main.go:141] libmachine: (functional-416610) Calling .DriverName
I1014 19:21:47.139155  377978 ssh_runner.go:195] Run: systemctl --version
I1014 19:21:47.139187  377978 main.go:141] libmachine: (functional-416610) Calling .GetSSHHostname
I1014 19:21:47.141967  377978 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:47.142454  377978 main.go:141] libmachine: (functional-416610) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:b4:93", ip: ""} in network mk-functional-416610: {Iface:virbr1 ExpiryTime:2025-10-14 20:19:11 +0000 UTC Type:0 Mac:52:54:00:08:b4:93 Iaid: IPaddr:192.168.39.139 Prefix:24 Hostname:functional-416610 Clientid:01:52:54:00:08:b4:93}
I1014 19:21:47.142484  377978 main.go:141] libmachine: (functional-416610) DBG | domain functional-416610 has defined IP address 192.168.39.139 and MAC address 52:54:00:08:b4:93 in network mk-functional-416610
I1014 19:21:47.142658  377978 main.go:141] libmachine: (functional-416610) Calling .GetSSHPort
I1014 19:21:47.142810  377978 main.go:141] libmachine: (functional-416610) Calling .GetSSHKeyPath
I1014 19:21:47.142948  377978 main.go:141] libmachine: (functional-416610) Calling .GetSSHUsername
I1014 19:21:47.143068  377978 sshutil.go:53] new ssh client: &{IP:192.168.39.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/functional-416610/id_rsa Username:docker}
I1014 19:21:47.228924  377978 build_images.go:161] Building image from path: /tmp/build.3051323636.tar
I1014 19:21:47.229014  377978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 19:21:47.244303  377978 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3051323636.tar
I1014 19:21:47.250774  377978 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3051323636.tar: stat -c "%s %y" /var/lib/minikube/build/build.3051323636.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3051323636.tar': No such file or directory
I1014 19:21:47.250822  377978 ssh_runner.go:362] scp /tmp/build.3051323636.tar --> /var/lib/minikube/build/build.3051323636.tar (3072 bytes)
I1014 19:21:47.298212  377978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3051323636
I1014 19:21:47.318764  377978 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3051323636 -xf /var/lib/minikube/build/build.3051323636.tar
I1014 19:21:47.332991  377978 crio.go:315] Building image: /var/lib/minikube/build/build.3051323636
I1014 19:21:47.333066  377978 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-416610 /var/lib/minikube/build/build.3051323636 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1014 19:21:52.192439  377978 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-416610 /var/lib/minikube/build/build.3051323636 --cgroup-manager=cgroupfs: (4.859338574s)
I1014 19:21:52.192517  377978 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3051323636
I1014 19:21:52.210261  377978 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3051323636.tar
I1014 19:21:52.225716  377978 build_images.go:217] Built localhost/my-image:functional-416610 from /tmp/build.3051323636.tar
I1014 19:21:52.225773  377978 build_images.go:133] succeeded building to: functional-416610
I1014 19:21:52.225781  377978 build_images.go:134] failed building to: 
I1014 19:21:52.225812  377978 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.225835  377978 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.226224  377978 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:52.226236  377978 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.226253  377978 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 19:21:52.226269  377978 main.go:141] libmachine: Making call to close driver server
I1014 19:21:52.226277  377978 main.go:141] libmachine: (functional-416610) Calling .Close
I1014 19:21:52.226580  377978 main.go:141] libmachine: (functional-416610) DBG | Closing plugin on server side
I1014 19:21:52.226657  377978 main.go:141] libmachine: Successfully made call to close driver server
I1014 19:21:52.226692  377978 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.62s)
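
The STEP lines in the build output imply a three-instruction build file under testdata/build. A reconstruction of an equivalent build, inferred from those STEP lines rather than copied from the repository (the directory and file contents here are illustrative):

mkdir -p /tmp/build && cd /tmp/build
echo hello > content.txt
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
out/minikube-linux-amd64 -p functional-416610 image build -t localhost/my-image:functional-416610 .

Note the mechanics visible in the stderr log: with the cri-o runtime there is no Docker daemon to build against, so minikube tars the context (/tmp/build.3051323636.tar), copies it into the guest over ssh, and runs sudo podman build there.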

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.736789515s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-416610
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.76s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
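
All three update-context cases exercise the same operation: minikube rewrites the kubeconfig entry for the profile so that it points at the machine's current API endpoint. A quick way to observe the effect by hand, assuming the default kubeconfig location:

out/minikube-linux-amd64 -p functional-416610 update-context
# show the API server URL the kubeconfig entry now carries
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-416610")].cluster.server}'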

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-416610 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-416610 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-n65q6" [82533d93-f8f6-498b-840d-165b3e78a8c3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-n65q6" [82533d93-f8f6-498b-840d-165b3e78a8c3] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.008302224s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)
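
The two kubectl calls above are the imperative shorthand for a one-Deployment, one-Service app. A roughly equivalent declarative manifest, written here for illustration only (the test runs just the commands shown):

kubectl --context functional-416610 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: echo-server
        image: kicbase/echo-server
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  type: NodePort
  selector:
    app: hello-node
  ports:
  - port: 8080
EOF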

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "342.609703ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "52.099368ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "294.565295ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.998799ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
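
The three ProfileCmd cases only time the list variants, but the -o json form is the one intended for scripting. Assuming the schema recent minikube releases emit (profiles grouped under valid and invalid keys; the log does not spell this out), it can be consumed like so:

out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'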

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdany-port4203843060/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760469683749745351" to /tmp/TestFunctionalparallelMountCmdany-port4203843060/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760469683749745351" to /tmp/TestFunctionalparallelMountCmdany-port4203843060/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760469683749745351" to /tmp/TestFunctionalparallelMountCmdany-port4203843060/001/test-1760469683749745351
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.276286ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:21:23.970330  368634 retry.go:31] will retry after 377.839413ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 19:21 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 19:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 19:21 test-1760469683749745351
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh cat /mount-9p/test-1760469683749745351
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-416610 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [34996ca0-a381-47d3-a500-7b4adcf562c5] Pending
helpers_test.go:352: "busybox-mount" [34996ca0-a381-47d3-a500-7b4adcf562c5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [34996ca0-a381-47d3-a500-7b4adcf562c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [34996ca0-a381-47d3-a500-7b4adcf562c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005596658s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-416610 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdany-port4203843060/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.56s)
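
The sequence above is a useful template for debugging 9p mounts by hand. A sketch of the same round trip, using an illustrative host directory:

mkdir -p /tmp/hostdir
out/minikube-linux-amd64 mount -p functional-416610 /tmp/hostdir:/mount-9p &
MOUNT_PID=$!
# the first findmnt can race the mount daemon, hence the retry in the test
out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p"
echo from-host > /tmp/hostdir/hello
out/minikube-linux-amd64 -p functional-416610 ssh "cat /mount-9p/hello"
kill $MOUNT_PID

The specific-port variant below runs the same flow but pins the 9p server to a fixed port with --port 46464 instead of letting minikube choose one.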

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image load --daemon kicbase/echo-server:functional-416610 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-416610 image load --daemon kicbase/echo-server:functional-416610 --alsologtostderr: (1.350854437s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image load --daemon kicbase/echo-server:functional-416610 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-416610
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image load --daemon kicbase/echo-server:functional-416610 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image save kicbase/echo-server:functional-416610 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image rm kicbase/echo-server:functional-416610 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-416610
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 image save --daemon kicbase/echo-server:functional-416610 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-416610
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
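
Taken together, the last four cases cover the full image round trip between the host and the cluster runtime. A condensed replay of the same flow, using an illustrative tarball path:

# cluster runtime -> host tarball
out/minikube-linux-amd64 -p functional-416610 image save kicbase/echo-server:functional-416610 /tmp/echo-server-save.tar
# remove the image from the cluster runtime
out/minikube-linux-amd64 -p functional-416610 image rm kicbase/echo-server:functional-416610
# host tarball -> cluster runtime
out/minikube-linux-amd64 -p functional-416610 image load /tmp/echo-server-save.tar
# cluster runtime -> local docker daemon; note the localhost/ prefix cri-o stores it under
out/minikube-linux-amd64 -p functional-416610 image save --daemon kicbase/echo-server:functional-416610
docker image inspect localhost/kicbase/echo-server:functional-416610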

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.9s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.90s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdspecific-port523784998/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.447168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:21:32.550260  368634 retry.go:31] will retry after 701.392722ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdspecific-port523784998/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "sudo umount -f /mount-9p": exit status 1 (216.757611ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-416610 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdspecific-port523784998/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service list -o json
functional_test.go:1504: Took "837.508858ms" to run "out/minikube-linux-amd64 -p functional-416610 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.139:32450
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.139:32450
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T" /mount1: exit status 1 (262.960483ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 19:21:34.541261  368634 retry.go:31] will retry after 743.446324ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-416610 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-416610 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-416610 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2156254086/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-416610
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-416610
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-416610
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.49s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:23:26.585510  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:23:54.296953  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m18.777278253s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (199.49s)
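
StartCluster stands up a highly available cluster in one invocation: --ha requests multiple control-plane nodes (three by default) and --wait true blocks until core components report healthy, which accounts for most of the 3m18s. The shape of the command with the CI-only flags trimmed:

out/minikube-linux-amd64 -p ha-759835 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
# afterwards, status should list every control-plane node as Running
out/minikube-linux-amd64 -p ha-759835 status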

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 kubectl -- rollout status deployment/busybox: (4.799008644s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-27whp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-d8q7p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-drvhf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-27whp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-d8q7p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-drvhf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-27whp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-d8q7p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-drvhf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.00s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-27whp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-27whp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-d8q7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-d8q7p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-drvhf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 kubectl -- exec busybox-7b57f96db7-drvhf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
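
Each pod's host-reachability probe is two execs: resolve host.minikube.internal inside the pod, then ping the address it resolves to (192.168.39.1, the KVM gateway, in this run). A sketch with the pod name as a placeholder:

    POD=busybox-7b57f96db7-27whp   # any busybox pod from the list above
    HOST_IP=$(out/minikube-linux-amd64 -p ha-759835 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-759835 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"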

TestMultiControlPlane/serial/AddWorkerNode (47.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node add --alsologtostderr -v 5
E1014 19:26:22.791178  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:22.797611  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:22.809041  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:22.830476  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:22.871954  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:22.953511  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:23.114953  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:23.436559  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:24.078568  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:25.360258  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:26:27.922494  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 node add --alsologtostderr -v 5: (46.288005514s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.18s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-759835 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)
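
The HAppy/Degraded checks in this suite all key off this one command: profile list --output json reports an aggregate status per profile. A sketch of extracting that field; jq and the exact key names ("valid", "Name", "Status") are assumptions about recent minikube output, not taken from this log:

    # print the aggregate status of the ha-759835 profile
    out/minikube-linux-amd64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-759835") | .Status'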

TestMultiControlPlane/serial/CopyFile (13.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp testdata/cp-test.txt ha-759835:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2117183097/001/cp-test_ha-759835.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835:/home/docker/cp-test.txt ha-759835-m02:/home/docker/cp-test_ha-759835_ha-759835-m02.txt
E1014 19:26:33.044498  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test_ha-759835_ha-759835-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835:/home/docker/cp-test.txt ha-759835-m03:/home/docker/cp-test_ha-759835_ha-759835-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test_ha-759835_ha-759835-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835:/home/docker/cp-test.txt ha-759835-m04:/home/docker/cp-test_ha-759835_ha-759835-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test_ha-759835_ha-759835-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp testdata/cp-test.txt ha-759835-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2117183097/001/cp-test_ha-759835-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m02:/home/docker/cp-test.txt ha-759835:/home/docker/cp-test_ha-759835-m02_ha-759835.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test_ha-759835-m02_ha-759835.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m02:/home/docker/cp-test.txt ha-759835-m03:/home/docker/cp-test_ha-759835-m02_ha-759835-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test_ha-759835-m02_ha-759835-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m02:/home/docker/cp-test.txt ha-759835-m04:/home/docker/cp-test_ha-759835-m02_ha-759835-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test_ha-759835-m02_ha-759835-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp testdata/cp-test.txt ha-759835-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2117183097/001/cp-test_ha-759835-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m03:/home/docker/cp-test.txt ha-759835:/home/docker/cp-test_ha-759835-m03_ha-759835.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test_ha-759835-m03_ha-759835.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m03:/home/docker/cp-test.txt ha-759835-m02:/home/docker/cp-test_ha-759835-m03_ha-759835-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test_ha-759835-m03_ha-759835-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m03:/home/docker/cp-test.txt ha-759835-m04:/home/docker/cp-test_ha-759835-m03_ha-759835-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test_ha-759835-m03_ha-759835-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp testdata/cp-test.txt ha-759835-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2117183097/001/cp-test_ha-759835-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m04:/home/docker/cp-test.txt ha-759835:/home/docker/cp-test_ha-759835-m04_ha-759835.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835 "sudo cat /home/docker/cp-test_ha-759835-m04_ha-759835.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m04:/home/docker/cp-test.txt ha-759835-m02:/home/docker/cp-test_ha-759835-m04_ha-759835-m02.txt
E1014 19:26:43.286009  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m02 "sudo cat /home/docker/cp-test_ha-759835-m04_ha-759835-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 cp ha-759835-m04:/home/docker/cp-test.txt ha-759835-m03:/home/docker/cp-test_ha-759835-m04_ha-759835-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 ssh -n ha-759835-m03 "sudo cat /home/docker/cp-test_ha-759835-m04_ha-759835-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.34s)
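
Every leg of the copy matrix above is the same round trip: cp a file onto a node, cp it back out (or across to another node), and cat it over ssh to confirm the contents survived. A condensed sketch; the closing diff is an addition for illustration, not part of the test:

    out/minikube-linux-amd64 -p ha-759835 cp testdata/cp-test.txt ha-759835:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-759835 cp ha-759835:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip intact"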

TestMultiControlPlane/serial/StopSecondaryNode (82.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node stop m02 --alsologtostderr -v 5
E1014 19:27:03.767599  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:27:44.729673  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 node stop m02 --alsologtostderr -v 5: (1m22.040749618s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5: exit status 7 (692.478412ms)

-- stdout --
	ha-759835
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759835-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759835-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-759835-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1014 19:28:06.715926  382805 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:28:06.716214  382805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:28:06.716225  382805 out.go:374] Setting ErrFile to fd 2...
	I1014 19:28:06.716230  382805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:28:06.716538  382805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:28:06.716769  382805 out.go:368] Setting JSON to false
	I1014 19:28:06.716803  382805 mustload.go:65] Loading cluster: ha-759835
	I1014 19:28:06.716882  382805 notify.go:220] Checking for updates...
	I1014 19:28:06.717247  382805 config.go:182] Loaded profile config "ha-759835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:28:06.717265  382805 status.go:174] checking status of ha-759835 ...
	I1014 19:28:06.717792  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.717840  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:06.737283  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45937
	I1014 19:28:06.737792  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:06.738478  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:06.738503  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:06.738899  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:06.739176  382805 main.go:141] libmachine: (ha-759835) Calling .GetState
	I1014 19:28:06.741429  382805 status.go:371] ha-759835 host status = "Running" (err=<nil>)
	I1014 19:28:06.741452  382805 host.go:66] Checking if "ha-759835" exists ...
	I1014 19:28:06.741803  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.741856  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:06.755977  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1014 19:28:06.756544  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:06.757116  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:06.757145  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:06.757596  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:06.757811  382805 main.go:141] libmachine: (ha-759835) Calling .GetIP
	I1014 19:28:06.761391  382805 main.go:141] libmachine: (ha-759835) DBG | domain ha-759835 has defined MAC address 52:54:00:16:3d:d2 in network mk-ha-759835
	I1014 19:28:06.761903  382805 main.go:141] libmachine: (ha-759835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:3d:d2", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:22:30 +0000 UTC Type:0 Mac:52:54:00:16:3d:d2 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-759835 Clientid:01:52:54:00:16:3d:d2}
	I1014 19:28:06.761920  382805 main.go:141] libmachine: (ha-759835) DBG | domain ha-759835 has defined IP address 192.168.39.251 and MAC address 52:54:00:16:3d:d2 in network mk-ha-759835
	I1014 19:28:06.762142  382805 host.go:66] Checking if "ha-759835" exists ...
	I1014 19:28:06.762588  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.762639  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:06.777554  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I1014 19:28:06.778075  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:06.778610  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:06.778632  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:06.778973  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:06.779160  382805 main.go:141] libmachine: (ha-759835) Calling .DriverName
	I1014 19:28:06.779405  382805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:28:06.779430  382805 main.go:141] libmachine: (ha-759835) Calling .GetSSHHostname
	I1014 19:28:06.782647  382805 main.go:141] libmachine: (ha-759835) DBG | domain ha-759835 has defined MAC address 52:54:00:16:3d:d2 in network mk-ha-759835
	I1014 19:28:06.783148  382805 main.go:141] libmachine: (ha-759835) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:3d:d2", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:22:30 +0000 UTC Type:0 Mac:52:54:00:16:3d:d2 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-759835 Clientid:01:52:54:00:16:3d:d2}
	I1014 19:28:06.783175  382805 main.go:141] libmachine: (ha-759835) DBG | domain ha-759835 has defined IP address 192.168.39.251 and MAC address 52:54:00:16:3d:d2 in network mk-ha-759835
	I1014 19:28:06.783356  382805 main.go:141] libmachine: (ha-759835) Calling .GetSSHPort
	I1014 19:28:06.783613  382805 main.go:141] libmachine: (ha-759835) Calling .GetSSHKeyPath
	I1014 19:28:06.783807  382805 main.go:141] libmachine: (ha-759835) Calling .GetSSHUsername
	I1014 19:28:06.783977  382805 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/ha-759835/id_rsa Username:docker}
	I1014 19:28:06.870257  382805 ssh_runner.go:195] Run: systemctl --version
	I1014 19:28:06.880831  382805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:28:06.902889  382805 kubeconfig.go:125] found "ha-759835" server: "https://192.168.39.254:8443"
	I1014 19:28:06.902941  382805 api_server.go:166] Checking apiserver status ...
	I1014 19:28:06.902979  382805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:28:06.924556  382805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	W1014 19:28:06.941071  382805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:28:06.941140  382805 ssh_runner.go:195] Run: ls
	I1014 19:28:06.947828  382805 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1014 19:28:06.952781  382805 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1014 19:28:06.952811  382805 status.go:463] ha-759835 apiserver status = Running (err=<nil>)
	I1014 19:28:06.952824  382805 status.go:176] ha-759835 status: &{Name:ha-759835 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:28:06.952847  382805 status.go:174] checking status of ha-759835-m02 ...
	I1014 19:28:06.953162  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.953211  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:06.969421  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34425
	I1014 19:28:06.969906  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:06.970408  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:06.970430  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:06.970807  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:06.971037  382805 main.go:141] libmachine: (ha-759835-m02) Calling .GetState
	I1014 19:28:06.972878  382805 status.go:371] ha-759835-m02 host status = "Stopped" (err=<nil>)
	I1014 19:28:06.972900  382805 status.go:384] host is not running, skipping remaining checks
	I1014 19:28:06.972908  382805 status.go:176] ha-759835-m02 status: &{Name:ha-759835-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:28:06.972947  382805 status.go:174] checking status of ha-759835-m03 ...
	I1014 19:28:06.973261  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.973300  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:06.987139  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35327
	I1014 19:28:06.987612  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:06.988079  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:06.988099  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:06.988527  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:06.988761  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetState
	I1014 19:28:06.990464  382805 status.go:371] ha-759835-m03 host status = "Running" (err=<nil>)
	I1014 19:28:06.990487  382805 host.go:66] Checking if "ha-759835-m03" exists ...
	I1014 19:28:06.990850  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:06.990892  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:07.004641  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43901
	I1014 19:28:07.005075  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:07.005650  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:07.005688  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:07.006074  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:07.006356  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetIP
	I1014 19:28:07.009381  382805 main.go:141] libmachine: (ha-759835-m03) DBG | domain ha-759835-m03 has defined MAC address 52:54:00:a7:a2:3e in network mk-ha-759835
	I1014 19:28:07.009804  382805 main.go:141] libmachine: (ha-759835-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:a2:3e", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:24:19 +0000 UTC Type:0 Mac:52:54:00:a7:a2:3e Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-759835-m03 Clientid:01:52:54:00:a7:a2:3e}
	I1014 19:28:07.009844  382805 main.go:141] libmachine: (ha-759835-m03) DBG | domain ha-759835-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:a2:3e in network mk-ha-759835
	I1014 19:28:07.009977  382805 host.go:66] Checking if "ha-759835-m03" exists ...
	I1014 19:28:07.010278  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:07.010344  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:07.025167  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I1014 19:28:07.025812  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:07.026392  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:07.026415  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:07.026794  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:07.027006  382805 main.go:141] libmachine: (ha-759835-m03) Calling .DriverName
	I1014 19:28:07.027225  382805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:28:07.027251  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetSSHHostname
	I1014 19:28:07.030783  382805 main.go:141] libmachine: (ha-759835-m03) DBG | domain ha-759835-m03 has defined MAC address 52:54:00:a7:a2:3e in network mk-ha-759835
	I1014 19:28:07.031395  382805 main.go:141] libmachine: (ha-759835-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a7:a2:3e", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:24:19 +0000 UTC Type:0 Mac:52:54:00:a7:a2:3e Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-759835-m03 Clientid:01:52:54:00:a7:a2:3e}
	I1014 19:28:07.031427  382805 main.go:141] libmachine: (ha-759835-m03) DBG | domain ha-759835-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:a7:a2:3e in network mk-ha-759835
	I1014 19:28:07.031644  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetSSHPort
	I1014 19:28:07.031846  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetSSHKeyPath
	I1014 19:28:07.032025  382805 main.go:141] libmachine: (ha-759835-m03) Calling .GetSSHUsername
	I1014 19:28:07.032153  382805 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/ha-759835-m03/id_rsa Username:docker}
	I1014 19:28:07.118666  382805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:28:07.144582  382805 kubeconfig.go:125] found "ha-759835" server: "https://192.168.39.254:8443"
	I1014 19:28:07.144615  382805 api_server.go:166] Checking apiserver status ...
	I1014 19:28:07.144656  382805 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:28:07.166890  382805 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup
	W1014 19:28:07.180929  382805 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:28:07.180991  382805 ssh_runner.go:195] Run: ls
	I1014 19:28:07.186607  382805 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1014 19:28:07.193568  382805 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1014 19:28:07.193598  382805 status.go:463] ha-759835-m03 apiserver status = Running (err=<nil>)
	I1014 19:28:07.193612  382805 status.go:176] ha-759835-m03 status: &{Name:ha-759835-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:28:07.193657  382805 status.go:174] checking status of ha-759835-m04 ...
	I1014 19:28:07.194006  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:07.194058  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:07.208414  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I1014 19:28:07.208960  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:07.209482  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:07.209508  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:07.209966  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:07.210212  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetState
	I1014 19:28:07.211941  382805 status.go:371] ha-759835-m04 host status = "Running" (err=<nil>)
	I1014 19:28:07.211975  382805 host.go:66] Checking if "ha-759835-m04" exists ...
	I1014 19:28:07.212253  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:07.212291  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:07.225741  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1014 19:28:07.226236  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:07.226755  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:07.226782  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:07.227199  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:07.227444  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetIP
	I1014 19:28:07.231131  382805 main.go:141] libmachine: (ha-759835-m04) DBG | domain ha-759835-m04 has defined MAC address 52:54:00:5d:81:93 in network mk-ha-759835
	I1014 19:28:07.231656  382805 main.go:141] libmachine: (ha-759835-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:81:93", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:25:59 +0000 UTC Type:0 Mac:52:54:00:5d:81:93 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-759835-m04 Clientid:01:52:54:00:5d:81:93}
	I1014 19:28:07.231688  382805 main.go:141] libmachine: (ha-759835-m04) DBG | domain ha-759835-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:5d:81:93 in network mk-ha-759835
	I1014 19:28:07.231840  382805 host.go:66] Checking if "ha-759835-m04" exists ...
	I1014 19:28:07.232144  382805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:28:07.232191  382805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:28:07.246278  382805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43929
	I1014 19:28:07.246757  382805 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:28:07.247252  382805 main.go:141] libmachine: Using API Version  1
	I1014 19:28:07.247278  382805 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:28:07.247656  382805 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:28:07.247902  382805 main.go:141] libmachine: (ha-759835-m04) Calling .DriverName
	I1014 19:28:07.248119  382805 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:28:07.248144  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetSSHHostname
	I1014 19:28:07.251284  382805 main.go:141] libmachine: (ha-759835-m04) DBG | domain ha-759835-m04 has defined MAC address 52:54:00:5d:81:93 in network mk-ha-759835
	I1014 19:28:07.251818  382805 main.go:141] libmachine: (ha-759835-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:81:93", ip: ""} in network mk-ha-759835: {Iface:virbr1 ExpiryTime:2025-10-14 20:25:59 +0000 UTC Type:0 Mac:52:54:00:5d:81:93 Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:ha-759835-m04 Clientid:01:52:54:00:5d:81:93}
	I1014 19:28:07.251856  382805 main.go:141] libmachine: (ha-759835-m04) DBG | domain ha-759835-m04 has defined IP address 192.168.39.79 and MAC address 52:54:00:5d:81:93 in network mk-ha-759835
	I1014 19:28:07.252023  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetSSHPort
	I1014 19:28:07.252198  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetSSHKeyPath
	I1014 19:28:07.252394  382805 main.go:141] libmachine: (ha-759835-m04) Calling .GetSSHUsername
	I1014 19:28:07.252561  382805 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/ha-759835-m04/id_rsa Username:docker}
	I1014 19:28:07.334989  382805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:28:07.353726  382805 status.go:176] ha-759835-m04 status: &{Name:ha-759835-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.73s)
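
The exit status 7 above is expected rather than a failure: minikube status encodes problems bitwise (1 = host, 2 = kubelet, 4 = apiserver not OK, per minikube's own help text), so one fully stopped control-plane node yields 1+2+4 = 7. A sketch of branching on that code in a script:

    out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
    rc=$?
    # 0 means every node is healthy; any non-zero code flags a degraded node
    if [ "$rc" -ne 0 ]; then
      echo "cluster degraded, status exit code $rc"
    fi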

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.68s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.78s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node start m02 --alsologtostderr -v 5
E1014 19:28:26.583382  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 node start m02 --alsologtostderr -v 5: (32.639963081s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5: (1.049728891s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.78s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.016237423s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 stop --alsologtostderr -v 5
E1014 19:29:06.651755  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:31:22.795543  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:31:50.494528  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 stop --alsologtostderr -v 5: (4m24.74105148s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 start --wait true --alsologtostderr -v 5
E1014 19:33:26.583522  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:34:49.660537  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 start --wait true --alsologtostderr -v 5: (2m1.543580892s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.42s)
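
The property under test is that a full stop/start cycle preserves the node list. A sketch of the same check as a shell diff; the temp-file paths are placeholders:

    out/minikube-linux-amd64 -p ha-759835 node list --alsologtostderr -v 5 > /tmp/nodes-before
    out/minikube-linux-amd64 -p ha-759835 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-759835 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-759835 node list --alsologtostderr -v 5 > /tmp/nodes-after
    diff /tmp/nodes-before /tmp/nodes-after && echo "node list unchanged across restart"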

TestMultiControlPlane/serial/DeleteSecondaryNode (17.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 node delete m03 --alsologtostderr -v 5: (16.808839495s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.64s)
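
The go-template probe on the last line is a compact way to assert that every surviving node reports Ready: it prints one condition status per node and nothing else. The same template standalone, with the log's nested quoting flattened:

    # expect one "True" line per remaining node (ha-759835, -m02, -m04)
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'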

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.65s)

TestMultiControlPlane/serial/StopCluster (262.49s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 stop --alsologtostderr -v 5
E1014 19:36:22.790586  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:38:26.583540  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 stop --alsologtostderr -v 5: (4m22.379640118s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5: exit status 7 (113.480847ms)

-- stdout --
	ha-759835
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759835-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-759835-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1014 19:39:49.975998  386782 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:39:49.976264  386782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:39:49.976274  386782 out.go:374] Setting ErrFile to fd 2...
	I1014 19:39:49.976278  386782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:39:49.976528  386782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:39:49.976759  386782 out.go:368] Setting JSON to false
	I1014 19:39:49.976793  386782 mustload.go:65] Loading cluster: ha-759835
	I1014 19:39:49.976855  386782 notify.go:220] Checking for updates...
	I1014 19:39:49.977336  386782 config.go:182] Loaded profile config "ha-759835": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:39:49.977362  386782 status.go:174] checking status of ha-759835 ...
	I1014 19:39:49.978013  386782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:39:49.978060  386782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:39:49.999143  386782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I1014 19:39:49.999697  386782 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:39:50.000387  386782 main.go:141] libmachine: Using API Version  1
	I1014 19:39:50.000418  386782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:39:50.000802  386782 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:39:50.001030  386782 main.go:141] libmachine: (ha-759835) Calling .GetState
	I1014 19:39:50.002906  386782 status.go:371] ha-759835 host status = "Stopped" (err=<nil>)
	I1014 19:39:50.002921  386782 status.go:384] host is not running, skipping remaining checks
	I1014 19:39:50.002928  386782 status.go:176] ha-759835 status: &{Name:ha-759835 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:39:50.002974  386782 status.go:174] checking status of ha-759835-m02 ...
	I1014 19:39:50.003281  386782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:39:50.003351  386782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:39:50.016933  386782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40381
	I1014 19:39:50.017426  386782 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:39:50.017877  386782 main.go:141] libmachine: Using API Version  1
	I1014 19:39:50.017896  386782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:39:50.018227  386782 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:39:50.018461  386782 main.go:141] libmachine: (ha-759835-m02) Calling .GetState
	I1014 19:39:50.020015  386782 status.go:371] ha-759835-m02 host status = "Stopped" (err=<nil>)
	I1014 19:39:50.020032  386782 status.go:384] host is not running, skipping remaining checks
	I1014 19:39:50.020040  386782 status.go:176] ha-759835-m02 status: &{Name:ha-759835-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:39:50.020066  386782 status.go:174] checking status of ha-759835-m04 ...
	I1014 19:39:50.020399  386782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:39:50.020441  386782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:39:50.034606  386782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1014 19:39:50.035049  386782 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:39:50.035576  386782 main.go:141] libmachine: Using API Version  1
	I1014 19:39:50.035616  386782 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:39:50.035961  386782 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:39:50.036133  386782 main.go:141] libmachine: (ha-759835-m04) Calling .GetState
	I1014 19:39:50.037807  386782 status.go:371] ha-759835-m04 host status = "Stopped" (err=<nil>)
	I1014 19:39:50.037821  386782 status.go:384] host is not running, skipping remaining checks
	I1014 19:39:50.037826  386782 status.go:176] ha-759835-m04 status: &{Name:ha-759835-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (262.49s)

TestMultiControlPlane/serial/RestartCluster (102.72s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:41:22.790497  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m41.943665099s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (102.72s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (77.47s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 node add --control-plane --alsologtostderr -v 5
E1014 19:42:45.856553  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-759835 node add --control-plane --alsologtostderr -v 5: (1m16.551259203s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-759835 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (77.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (80.02s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-521542 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:43:26.585120  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-521542 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m20.016263715s)
--- PASS: TestJSONOutput/start/Command (80.02s)
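
With --output=json, start emits one CloudEvent per line on stdout, and the Distinct/IncreasingCurrentSteps subtests below assert that the step counter in those events never repeats and never decreases. A sketch of inspecting the same stream; jq, the event-type string, and the currentstep field are assumptions about minikube's CloudEvents format, not taken from this log:

    out/minikube-linux-amd64 start -p json-output-521542 --output=json --user=testUser \
        --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'
    # the printed sequence should be strictly increasing with no duplicates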

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-521542 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-521542 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.17s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-521542 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-521542 --output=json --user=testUser: (7.174540914s)
--- PASS: TestJSONOutput/stop/Command (7.17s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-780259 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-780259 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.470881ms)

-- stdout --
	{"specversion":"1.0","id":"5fb08f05-5827-4c1d-af93-a48676c8b82d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-780259] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"92310c05-087a-4ae3-9f9c-9eaded8476b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"4082d91f-2eed-4973-8f83-f756027490de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"73755ecb-00af-4e53-a6e5-0430869c8c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig"}}
	{"specversion":"1.0","id":"bc642e09-b7ec-47b7-90fc-831937e55045","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube"}}
	{"specversion":"1.0","id":"cd97f0f0-7dd6-406f-8f0c-f6ef1871eff7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"02ab7460-3643-4378-b682-21d4c2996356","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6dd17dd4-2c6f-4b5c-80f2-34cee1b3b945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-780259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-780259
--- PASS: TestErrorJSONOutput (0.22s)
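Each stdout line in the TestErrorJSONOutput transcript above is a CloudEvents-style JSON envelope emitted by --output=json. Below is a minimal sketch of decoding that stream; this is not minikube code, the struct is inferred from the fields visible in this log, and every value under "data" arrives as a string (e.g. "currentstep":"0", "totalsteps":"19").

// decode_events.go — illustrative decoder for minikube's --output=json lines.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the envelope seen in the log output above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` in here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		default:
			fmt.Println(ev.Data["message"])
		}
	}
}

Piping `out/minikube-linux-amd64 start -p <profile> --output=json` into this program would print one line per step/info/error event, including the DRV_UNSUPPORTED_OS error with exit code 56 seen above.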
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (83.95s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-239109 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-239109 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.391549656s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-241461 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-241461 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.71903281s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-239109
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-241461
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-241461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-241461
helpers_test.go:175: Cleaning up "first-239109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-239109
--- PASS: TestMinikubeProfile (83.95s)

TestMountStart/serial/StartWithMountFirst (22.03s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-989997 --memory=3072 --mount-string /tmp/TestMountStartserial2869722291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-989997 --memory=3072 --mount-string /tmp/TestMountStartserial2869722291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.033710563s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.03s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-989997 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-989997 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
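VerifyMountFirst checks the 9p mount by running `findmnt --json /minikube-host` inside the VM. A minimal sketch of inspecting that JSON follows; the struct tracks util-linux findmnt's "filesystems" array, and running the command locally rather than over `minikube ssh --` is an assumption made for the example.

// verify_mount.go — illustrative check of `findmnt --json <dir>` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut matches the JSON shape printed by util-linux findmnt.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	// In the test this command runs inside the guest via `minikube ssh --`.
	out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		fmt.Println("not mounted:", err)
		return
	}
	var fm findmntOut
	if err := json.Unmarshal(out, &fm); err != nil || len(fm.Filesystems) == 0 {
		fmt.Println("unexpected findmnt output")
		return
	}
	fs := fm.Filesystems[0]
	fmt.Printf("mounted %s (%s) at %s\n", fs.Source, fs.FSType, fs.Target)
}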
TestMountStart/serial/StartWithMountSecond (21.47s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004999 --memory=3072 --mount-string /tmp/TestMountStartserial2869722291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:46:22.790459  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004999 --memory=3072 --mount-string /tmp/TestMountStartserial2869722291/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (20.469048496s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.47s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.71s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-989997 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-004999
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-004999: (1.2621438s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (19.64s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-004999
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-004999: (18.644059185s)
--- PASS: TestMountStart/serial/RestartStopped (19.64s)

TestMountStart/serial/VerifyMountPostStop (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-004999 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (99.08s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-078519 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:48:26.583517  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-078519 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.657436077s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.08s)

TestMultiNode/serial/DeployApp2Nodes (5.91s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-078519 -- rollout status deployment/busybox: (4.359882802s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-488qw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-bvggr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-488qw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-bvggr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-488qw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-bvggr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.91s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-488qw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-488qw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-bvggr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-078519 -- exec busybox-7b57f96db7-bvggr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (43.37s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-078519 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-078519 -v=5 --alsologtostderr: (42.788923456s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.37s)

TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-078519 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.6s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (7.42s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp testdata/cp-test.txt multinode-078519:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile384874306/001/cp-test_multinode-078519.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519:/home/docker/cp-test.txt multinode-078519-m02:/home/docker/cp-test_multinode-078519_multinode-078519-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test_multinode-078519_multinode-078519-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519:/home/docker/cp-test.txt multinode-078519-m03:/home/docker/cp-test_multinode-078519_multinode-078519-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test_multinode-078519_multinode-078519-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp testdata/cp-test.txt multinode-078519-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile384874306/001/cp-test_multinode-078519-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m02:/home/docker/cp-test.txt multinode-078519:/home/docker/cp-test_multinode-078519-m02_multinode-078519.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test_multinode-078519-m02_multinode-078519.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m02:/home/docker/cp-test.txt multinode-078519-m03:/home/docker/cp-test_multinode-078519-m02_multinode-078519-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test_multinode-078519-m02_multinode-078519-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp testdata/cp-test.txt multinode-078519-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile384874306/001/cp-test_multinode-078519-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m03:/home/docker/cp-test.txt multinode-078519:/home/docker/cp-test_multinode-078519-m03_multinode-078519.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519 "sudo cat /home/docker/cp-test_multinode-078519-m03_multinode-078519.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 cp multinode-078519-m03:/home/docker/cp-test.txt multinode-078519-m02:/home/docker/cp-test_multinode-078519-m03_multinode-078519-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 ssh -n multinode-078519-m02 "sudo cat /home/docker/cp-test_multinode-078519-m03_multinode-078519-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.42s)

TestMultiNode/serial/StopNode (2.43s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-078519 node stop m03: (1.541816639s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-078519 status: exit status 7 (440.641391ms)

-- stdout --
	multinode-078519
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-078519-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-078519-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr: exit status 7 (446.456454ms)

-- stdout --
	multinode-078519
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-078519-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-078519-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1014 19:49:35.454471  394432 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:49:35.454668  394432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:49:35.454683  394432 out.go:374] Setting ErrFile to fd 2...
	I1014 19:49:35.454687  394432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:49:35.454873  394432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:49:35.455079  394432 out.go:368] Setting JSON to false
	I1014 19:49:35.455120  394432 mustload.go:65] Loading cluster: multinode-078519
	I1014 19:49:35.455215  394432 notify.go:220] Checking for updates...
	I1014 19:49:35.455547  394432 config.go:182] Loaded profile config "multinode-078519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:49:35.455567  394432 status.go:174] checking status of multinode-078519 ...
	I1014 19:49:35.456027  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.456072  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.471698  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45539
	I1014 19:49:35.472245  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.472983  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.473024  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.473536  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.473795  394432 main.go:141] libmachine: (multinode-078519) Calling .GetState
	I1014 19:49:35.475949  394432 status.go:371] multinode-078519 host status = "Running" (err=<nil>)
	I1014 19:49:35.475970  394432 host.go:66] Checking if "multinode-078519" exists ...
	I1014 19:49:35.476343  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.476397  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.491251  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36521
	I1014 19:49:35.491862  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.492410  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.492433  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.492784  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.492967  394432 main.go:141] libmachine: (multinode-078519) Calling .GetIP
	I1014 19:49:35.496193  394432 main.go:141] libmachine: (multinode-078519) DBG | domain multinode-078519 has defined MAC address 52:54:00:a3:4c:9a in network mk-multinode-078519
	I1014 19:49:35.496673  394432 main.go:141] libmachine: (multinode-078519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:4c:9a", ip: ""} in network mk-multinode-078519: {Iface:virbr1 ExpiryTime:2025-10-14 20:47:11 +0000 UTC Type:0 Mac:52:54:00:a3:4c:9a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-078519 Clientid:01:52:54:00:a3:4c:9a}
	I1014 19:49:35.496703  394432 main.go:141] libmachine: (multinode-078519) DBG | domain multinode-078519 has defined IP address 192.168.39.57 and MAC address 52:54:00:a3:4c:9a in network mk-multinode-078519
	I1014 19:49:35.496910  394432 host.go:66] Checking if "multinode-078519" exists ...
	I1014 19:49:35.497262  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.497331  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.511735  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I1014 19:49:35.512220  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.512873  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.512914  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.513271  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.513522  394432 main.go:141] libmachine: (multinode-078519) Calling .DriverName
	I1014 19:49:35.513748  394432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:49:35.513786  394432 main.go:141] libmachine: (multinode-078519) Calling .GetSSHHostname
	I1014 19:49:35.517273  394432 main.go:141] libmachine: (multinode-078519) DBG | domain multinode-078519 has defined MAC address 52:54:00:a3:4c:9a in network mk-multinode-078519
	I1014 19:49:35.517784  394432 main.go:141] libmachine: (multinode-078519) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:4c:9a", ip: ""} in network mk-multinode-078519: {Iface:virbr1 ExpiryTime:2025-10-14 20:47:11 +0000 UTC Type:0 Mac:52:54:00:a3:4c:9a Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:multinode-078519 Clientid:01:52:54:00:a3:4c:9a}
	I1014 19:49:35.517812  394432 main.go:141] libmachine: (multinode-078519) DBG | domain multinode-078519 has defined IP address 192.168.39.57 and MAC address 52:54:00:a3:4c:9a in network mk-multinode-078519
	I1014 19:49:35.518004  394432 main.go:141] libmachine: (multinode-078519) Calling .GetSSHPort
	I1014 19:49:35.518208  394432 main.go:141] libmachine: (multinode-078519) Calling .GetSSHKeyPath
	I1014 19:49:35.518389  394432 main.go:141] libmachine: (multinode-078519) Calling .GetSSHUsername
	I1014 19:49:35.518554  394432 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/multinode-078519/id_rsa Username:docker}
	I1014 19:49:35.601800  394432 ssh_runner.go:195] Run: systemctl --version
	I1014 19:49:35.608845  394432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:49:35.628613  394432 kubeconfig.go:125] found "multinode-078519" server: "https://192.168.39.57:8443"
	I1014 19:49:35.628651  394432 api_server.go:166] Checking apiserver status ...
	I1014 19:49:35.628688  394432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 19:49:35.649496  394432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1360/cgroup
	W1014 19:49:35.662185  394432 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1360/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 19:49:35.662271  394432 ssh_runner.go:195] Run: ls
	I1014 19:49:35.667211  394432 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I1014 19:49:35.672613  394432 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I1014 19:49:35.672635  394432 status.go:463] multinode-078519 apiserver status = Running (err=<nil>)
	I1014 19:49:35.672645  394432 status.go:176] multinode-078519 status: &{Name:multinode-078519 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:49:35.672660  394432 status.go:174] checking status of multinode-078519-m02 ...
	I1014 19:49:35.672983  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.673026  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.687368  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I1014 19:49:35.687810  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.688370  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.688397  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.688801  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.688979  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetState
	I1014 19:49:35.690841  394432 status.go:371] multinode-078519-m02 host status = "Running" (err=<nil>)
	I1014 19:49:35.690862  394432 host.go:66] Checking if "multinode-078519-m02" exists ...
	I1014 19:49:35.691235  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.691290  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.705073  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37901
	I1014 19:49:35.705528  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.705983  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.706006  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.706371  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.706583  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetIP
	I1014 19:49:35.709469  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | domain multinode-078519-m02 has defined MAC address 52:54:00:c9:62:d0 in network mk-multinode-078519
	I1014 19:49:35.709895  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:62:d0", ip: ""} in network mk-multinode-078519: {Iface:virbr1 ExpiryTime:2025-10-14 20:48:05 +0000 UTC Type:0 Mac:52:54:00:c9:62:d0 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-078519-m02 Clientid:01:52:54:00:c9:62:d0}
	I1014 19:49:35.709944  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | domain multinode-078519-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:c9:62:d0 in network mk-multinode-078519
	I1014 19:49:35.710089  394432 host.go:66] Checking if "multinode-078519-m02" exists ...
	I1014 19:49:35.710446  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.710491  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.724683  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45137
	I1014 19:49:35.725162  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.725668  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.725695  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.726047  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.726271  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .DriverName
	I1014 19:49:35.726469  394432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 19:49:35.726564  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetSSHHostname
	I1014 19:49:35.729992  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | domain multinode-078519-m02 has defined MAC address 52:54:00:c9:62:d0 in network mk-multinode-078519
	I1014 19:49:35.730520  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:62:d0", ip: ""} in network mk-multinode-078519: {Iface:virbr1 ExpiryTime:2025-10-14 20:48:05 +0000 UTC Type:0 Mac:52:54:00:c9:62:d0 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:multinode-078519-m02 Clientid:01:52:54:00:c9:62:d0}
	I1014 19:49:35.730569  394432 main.go:141] libmachine: (multinode-078519-m02) DBG | domain multinode-078519-m02 has defined IP address 192.168.39.211 and MAC address 52:54:00:c9:62:d0 in network mk-multinode-078519
	I1014 19:49:35.730746  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetSSHPort
	I1014 19:49:35.730952  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetSSHKeyPath
	I1014 19:49:35.731106  394432 main.go:141] libmachine: (multinode-078519-m02) Calling .GetSSHUsername
	I1014 19:49:35.731240  394432 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-364627/.minikube/machines/multinode-078519-m02/id_rsa Username:docker}
	I1014 19:49:35.815441  394432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 19:49:35.831155  394432 status.go:176] multinode-078519-m02 status: &{Name:multinode-078519-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:49:35.831195  394432 status.go:174] checking status of multinode-078519-m03 ...
	I1014 19:49:35.831570  394432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:49:35.831615  394432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:49:35.846717  394432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46047
	I1014 19:49:35.847262  394432 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:49:35.847725  394432 main.go:141] libmachine: Using API Version  1
	I1014 19:49:35.847748  394432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:49:35.848153  394432 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:49:35.848377  394432 main.go:141] libmachine: (multinode-078519-m03) Calling .GetState
	I1014 19:49:35.850220  394432 status.go:371] multinode-078519-m03 host status = "Stopped" (err=<nil>)
	I1014 19:49:35.850239  394432 status.go:384] host is not running, skipping remaining checks
	I1014 19:49:35.850247  394432 status.go:176] multinode-078519-m03 status: &{Name:multinode-078519-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)

TestMultiNode/serial/StartAfterStop (37.05s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-078519 node start m03 -v=5 --alsologtostderr: (36.356099584s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.05s)

TestMultiNode/serial/RestartKeepsNodes (273.32s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-078519
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-078519
E1014 19:51:22.790912  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 19:51:29.663286  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-078519: (2m26.631975464s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-078519 --wait=true -v=5 --alsologtostderr
E1014 19:53:26.585583  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-078519 --wait=true -v=5 --alsologtostderr: (2m6.57826267s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-078519
--- PASS: TestMultiNode/serial/RestartKeepsNodes (273.32s)

TestMultiNode/serial/DeleteNode (2.95s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-078519 node delete m03: (2.396369785s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.95s)

TestMultiNode/serial/StopMultiNode (176.72s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 stop
E1014 19:56:22.794651  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-078519 stop: (2m56.534591919s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-078519 status: exit status 7 (96.193683ms)

-- stdout --
	multinode-078519
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-078519-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr: exit status 7 (87.390503ms)

-- stdout --
	multinode-078519
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-078519-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1014 19:57:45.855170  397095 out.go:360] Setting OutFile to fd 1 ...
	I1014 19:57:45.855439  397095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:57:45.855449  397095 out.go:374] Setting ErrFile to fd 2...
	I1014 19:57:45.855453  397095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 19:57:45.855637  397095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 19:57:45.855822  397095 out.go:368] Setting JSON to false
	I1014 19:57:45.855851  397095 mustload.go:65] Loading cluster: multinode-078519
	I1014 19:57:45.855955  397095 notify.go:220] Checking for updates...
	I1014 19:57:45.856259  397095 config.go:182] Loaded profile config "multinode-078519": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 19:57:45.856278  397095 status.go:174] checking status of multinode-078519 ...
	I1014 19:57:45.856784  397095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:57:45.856832  397095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:57:45.871208  397095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40139
	I1014 19:57:45.871744  397095 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:57:45.872488  397095 main.go:141] libmachine: Using API Version  1
	I1014 19:57:45.872532  397095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:57:45.873027  397095 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:57:45.873301  397095 main.go:141] libmachine: (multinode-078519) Calling .GetState
	I1014 19:57:45.875251  397095 status.go:371] multinode-078519 host status = "Stopped" (err=<nil>)
	I1014 19:57:45.875266  397095 status.go:384] host is not running, skipping remaining checks
	I1014 19:57:45.875272  397095 status.go:176] multinode-078519 status: &{Name:multinode-078519 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 19:57:45.875302  397095 status.go:174] checking status of multinode-078519-m02 ...
	I1014 19:57:45.875625  397095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 19:57:45.875683  397095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 19:57:45.889532  397095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I1014 19:57:45.890086  397095 main.go:141] libmachine: () Calling .GetVersion
	I1014 19:57:45.890701  397095 main.go:141] libmachine: Using API Version  1
	I1014 19:57:45.890731  397095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 19:57:45.891126  397095 main.go:141] libmachine: () Calling .GetMachineName
	I1014 19:57:45.891385  397095 main.go:141] libmachine: (multinode-078519-m02) Calling .GetState
	I1014 19:57:45.893172  397095 status.go:371] multinode-078519-m02 host status = "Stopped" (err=<nil>)
	I1014 19:57:45.893188  397095 status.go:384] host is not running, skipping remaining checks
	I1014 19:57:45.893196  397095 status.go:176] multinode-078519-m02 status: &{Name:multinode-078519-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (176.72s)

TestMultiNode/serial/RestartMultiNode (86.38s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-078519 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:58:26.583957  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-078519 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m25.810909663s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-078519 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.38s)

TestMultiNode/serial/ValidateNameConflict (39.7s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-078519
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-078519-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-078519-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (70.502975ms)

-- stdout --
	* [multinode-078519-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-078519-m02' is duplicated with machine name 'multinode-078519-m02' in profile 'multinode-078519'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-078519-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 19:59:25.860582  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-078519-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.503736341s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-078519
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-078519: exit status 80 (229.994746ms)

-- stdout --
	* Adding node m03 to cluster multinode-078519 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-078519-m03 already exists in multinode-078519-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-078519-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.70s)

TestScheduledStopUnix (108.4s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-464504 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-464504 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.590392572s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464504 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-464504 -n scheduled-stop-464504
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464504 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1014 20:03:09.895902  368634 retry.go:31] will retry after 69.246µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.897050  368634 retry.go:31] will retry after 222.939µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.898183  368634 retry.go:31] will retry after 304.254µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.899323  368634 retry.go:31] will retry after 178.794µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.900448  368634 retry.go:31] will retry after 604.861µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.901551  368634 retry.go:31] will retry after 522.31µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.902681  368634 retry.go:31] will retry after 634.648µs: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.903794  368634 retry.go:31] will retry after 2.064019ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.905945  368634 retry.go:31] will retry after 3.041402ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.909115  368634 retry.go:31] will retry after 2.730794ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.912362  368634 retry.go:31] will retry after 3.735051ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.916601  368634 retry.go:31] will retry after 9.633558ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.926902  368634 retry.go:31] will retry after 8.489624ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.936203  368634 retry.go:31] will retry after 14.588097ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.951504  368634 retry.go:31] will retry after 21.171449ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
I1014 20:03:09.973807  368634 retry.go:31] will retry after 55.274857ms: open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/scheduled-stop-464504/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464504 --cancel-scheduled
E1014 20:03:26.585842  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464504 -n scheduled-stop-464504
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-464504
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-464504 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-464504
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-464504: exit status 7 (78.74763ms)

                                                
                                                
-- stdout --
	scheduled-stop-464504
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464504 -n scheduled-stop-464504
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-464504 -n scheduled-stop-464504: exit status 7 (78.67479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-464504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-464504
--- PASS: TestScheduledStopUnix (108.40s)
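
The scheduled-stop flow above boils down to a few commands; a sketch with an illustrative profile name demo:

    minikube stop -p demo --schedule 5m                  # arm a stop five minutes out
    minikube status -p demo --format='{{.TimeToStop}}'   # shows the remaining countdown
    minikube stop -p demo --cancel-scheduled             # disarm a pending stop
    # after a scheduled stop fires, minikube status exits 7 and reports host: Stopped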

                                                
                                    
TestRunningBinaryUpgrade (149.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1185497733 start -p running-upgrade-370635 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1185497733 start -p running-upgrade-370635 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.000842152s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-370635 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-370635 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (47.156884623s)
helpers_test.go:175: Cleaning up "running-upgrade-370635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-370635
--- PASS: TestRunningBinaryUpgrade (149.03s)
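
The running-binary upgrade amounts to starting the profile with the old release and re-running start with the new binary while the cluster is still up; a sketch, with an illustrative path to the old release binary:

    /tmp/minikube-v1.32.0 start -p demo --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p demo --memory=3072 --driver=kvm2 --container-runtime=crio   # adopts the running cluster in place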

                                                
                                    
TestKubernetesUpgrade (130.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.744328424s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-425560
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-425560: (1.874479041s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-425560 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-425560 status --format={{.Host}}: exit status 7 (67.411132ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (42.658122574s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-425560 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (105.355364ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-425560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-425560
	    minikube start -p kubernetes-upgrade-425560 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4255602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-425560 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-425560 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (44.158207594s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-425560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-425560
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-425560: (1.150368892s)
--- PASS: TestKubernetesUpgrade (130.84s)
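
The upgrade/downgrade contract shown above, as plain commands (profile name illustrative):

    minikube start -p demo --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p demo
    minikube start -p demo --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio   # in-place upgrade is supported
    minikube start -p demo --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio   # downgrade exits 106 (K8S_DOWNGRADE_UNSUPPORTED)
    minikube delete -p demo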

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (89.902612ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-280962] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
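
As the usage error explains, --no-kubernetes is incompatible with a pinned Kubernetes version; clearing any global pin first avoids the exit-14 path:

    minikube config unset kubernetes-version
    minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio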

                                                
                                    
TestPause/serial/Start (106.31s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-488160 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-488160 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m46.306039955s)
--- PASS: TestPause/serial/Start (106.31s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (82.84s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280962 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280962 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.540131615s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-280962 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (82.84s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (6.927886814s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-280962 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-280962 status -o json: exit status 2 (258.46468ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-280962","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-280962
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.06s)
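
Re-running start with --no-kubernetes on an existing profile keeps the VM but stops the Kubernetes components, which is why status then exits 2; a sketch:

    minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio
    minikube status -p demo -o json   # exit 2: Host "Running", Kubelet/APIServer "Stopped"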

                                                
                                    
TestNoKubernetes/serial/Start (37.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280962 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.774865123s)
--- PASS: TestNoKubernetes/serial/Start (37.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-280962 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-280962 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.699865ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
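
The verification is a plain systemd check over SSH; any non-zero exit means the kubelet unit is not active:

    minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running"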

                                                
                                    
TestNoKubernetes/serial/ProfileList (5.73s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (4.380053855s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.344992186s)
--- PASS: TestNoKubernetes/serial/ProfileList (5.73s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-280962
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-280962: (1.307167422s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (40.73s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-280962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-280962 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (40.72542727s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-280962 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-280962 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.776056ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (130.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1037328133 start -p stopped-upgrade-474265 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1037328133 start -p stopped-upgrade-474265 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (58.712613961s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1037328133 -p stopped-upgrade-474265 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1037328133 -p stopped-upgrade-474265 stop: (1.721621696s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-474265 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 20:08:26.583639  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-474265 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m10.143563284s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (130.58s)
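
The stopped-binary upgrade differs from the running one only in the intervening stop issued by the old binary; a sketch with an illustrative path to the old release binary:

    /tmp/minikube-v1.32.0 start -p demo --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.32.0 -p demo stop
    out/minikube-linux-amd64 start -p demo --memory=3072 --driver=kvm2 --container-runtime=crio   # new binary restarts and upgrades the stopped cluster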

                                                
                                    
TestNetworkPlugins/group/false (3.35s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-880673 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-880673 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (112.873323ms)

                                                
                                                
-- stdout --
	* [false-880673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 20:07:27.730792  404864 out.go:360] Setting OutFile to fd 1 ...
	I1014 20:07:27.731098  404864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:27.731110  404864 out.go:374] Setting ErrFile to fd 2...
	I1014 20:07:27.731115  404864 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1014 20:07:27.731340  404864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-364627/.minikube/bin
	I1014 20:07:27.732305  404864 out.go:368] Setting JSON to false
	I1014 20:07:27.734124  404864 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":6591,"bootTime":1760465857,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 20:07:27.734278  404864 start.go:141] virtualization: kvm guest
	I1014 20:07:27.736118  404864 out.go:179] * [false-880673] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1014 20:07:27.738017  404864 notify.go:220] Checking for updates...
	I1014 20:07:27.738047  404864 out.go:179]   - MINIKUBE_LOCATION=21409
	I1014 20:07:27.739305  404864 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 20:07:27.740693  404864 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-364627/kubeconfig
	I1014 20:07:27.742006  404864 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-364627/.minikube
	I1014 20:07:27.743373  404864 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 20:07:27.744873  404864 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 20:07:27.746458  404864 config.go:182] Loaded profile config "force-systemd-env-702842": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:27.746569  404864 config.go:182] Loaded profile config "kubernetes-upgrade-425560": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1014 20:07:27.746650  404864 config.go:182] Loaded profile config "stopped-upgrade-474265": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1014 20:07:27.746733  404864 driver.go:421] Setting default libvirt URI to qemu:///system
	I1014 20:07:27.788959  404864 out.go:179] * Using the kvm2 driver based on user configuration
	I1014 20:07:27.790151  404864 start.go:305] selected driver: kvm2
	I1014 20:07:27.790169  404864 start.go:925] validating driver "kvm2" against <nil>
	I1014 20:07:27.790187  404864 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 20:07:27.792062  404864 out.go:203] 
	W1014 20:07:27.793114  404864 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1014 20:07:27.794093  404864 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-880673 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-880673" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.247:8443
  name: kubernetes-upgrade-425560
contexts:
- context:
    cluster: kubernetes-upgrade-425560
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-425560
  name: kubernetes-upgrade-425560
current-context: kubernetes-upgrade-425560
kind: Config
users:
- name: kubernetes-upgrade-425560
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.crt
    client-key: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-880673

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-880673"

                                                
                                                
----------------------- debugLogs end: false-880673 [took: 3.087186821s] --------------------------------
helpers_test.go:175: Cleaning up "false-880673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-880673
--- PASS: TestNetworkPlugins/group/false (3.35s)
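
The validation error is the expected result here: cri-o has no built-in networking, so minikube refuses --cni=false with that runtime. Selecting any concrete CNI (values include bridge, calico, cilium, flannel, kindnet) passes validation:

    minikube start -p demo --cni=false --container-runtime=crio    # rejected, exit 14: "crio" requires CNI
    minikube start -p demo --cni=bridge --container-runtime=crio   # an explicit CNI satisfies the check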

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (71.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-652371 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-652371 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m11.131670101s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (71.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-062000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-062000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m29.421618629s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (89.42s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-474265
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-474265: (1.039719815s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (126.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-905132 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-905132 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (2m6.238286856s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (126.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-652371 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0b739b52-7f38-46b2-967e-b88ba93f3cad] Pending
helpers_test.go:352: "busybox" [0b739b52-7f38-46b2-967e-b88ba93f3cad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0b739b52-7f38-46b2-967e-b88ba93f3cad] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004880628s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-652371 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
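
The DeployApp step is approximately the following, with the readiness poll expressed as a kubectl wait (the harness polls pod status instead):

    kubectl --context demo create -f testdata/busybox.yaml
    kubectl --context demo wait --for=condition=ready pod -l integration-test=busybox --timeout=480s
    kubectl --context demo exec busybox -- /bin/sh -c "ulimit -n"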

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-652371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-652371 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096600846s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-652371 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)
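
The addon is enabled with per-component image and registry overrides, which is how the suite substitutes a fake registry; the same flags work directly:

    minikube addons enable metrics-server -p demo \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context demo describe deploy/metrics-server -n kube-system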

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (86.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-652371 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-652371 --alsologtostderr -v=3: (1m26.433423079s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-062000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [21312b78-c7d4-4025-864f-bf8e26ee91e2] Pending
helpers_test.go:352: "busybox" [21312b78-c7d4-4025-864f-bf8e26ee91e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [21312b78-c7d4-4025-864f-bf8e26ee91e2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005106188s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-062000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-062000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-062000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051404884s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-062000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (81.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-062000 --alsologtostderr -v=3
E1014 20:11:22.790569  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-062000 --alsologtostderr -v=3: (1m21.170700543s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (81.17s)

TestStartStop/group/no-preload/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905132 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d66147c2-e0b0-4d75-812a-1ef55a4c21a6] Pending
helpers_test.go:352: "busybox" [d66147c2-e0b0-4d75-812a-1ef55a4c21a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d66147c2-e0b0-4d75-812a-1ef55a4c21a6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005508097s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-905132 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-905132 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-905132 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.140732642s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-905132 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-652371 -n old-k8s-version-652371
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-652371 -n old-k8s-version-652371: exit status 7 (88.104512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-652371 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (44.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-652371 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-652371 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.451446947s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-652371 -n old-k8s-version-652371
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.74s)

TestStartStop/group/no-preload/serial/Stop (82.56s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-905132 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-905132 --alsologtostderr -v=3: (1m22.562948862s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.56s)

TestStartStop/group/embed-certs/serial/FirstStart (55.81s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-158674 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-158674 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (55.81465519s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.81s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000: exit status 7 (79.598545ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-062000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-062000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-062000 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m4.917067463s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jx724" [b8105406-931d-44c7-8c0d-e97221358802] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jx724" [b8105406-931d-44c7-8c0d-e97221358802] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.00486231s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jx724" [b8105406-931d-44c7-8c0d-e97221358802] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004881314s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-652371 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-652371 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-652371 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-652371 --alsologtostderr -v=1: (1.223007049s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-652371 -n old-k8s-version-652371
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-652371 -n old-k8s-version-652371: exit status 2 (275.376458ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-652371 -n old-k8s-version-652371
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-652371 -n old-k8s-version-652371: exit status 2 (298.241763ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-652371 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-652371 -n old-k8s-version-652371
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-652371 -n old-k8s-version-652371
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.46s)

TestStartStop/group/newest-cni/serial/FirstStart (49.56s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-976208 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-976208 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (49.563422486s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.56s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905132 -n no-preload-905132
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905132 -n no-preload-905132: exit status 7 (75.769472ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-905132 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (65.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-905132 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-905132 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m5.295709458s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-905132 -n no-preload-905132
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (65.65s)

TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-158674 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f752530f-0f63-471b-86c5-be4cafc867f8] Pending
helpers_test.go:352: "busybox" [f752530f-0f63-471b-86c5-be4cafc867f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f752530f-0f63-471b-86c5-be4cafc867f8] Running
E1014 20:13:26.583293  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.159832102s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-158674 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-158674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-158674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.274088621s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-158674 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-696n4" [3bef3233-3c67-4c96-91cb-06dc4fb264db] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-696n4" [3bef3233-3c67-4c96-91cb-06dc4fb264db] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.005817405s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/embed-certs/serial/Stop (82.53s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-158674 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-158674 --alsologtostderr -v=3: (1m22.528831667s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.53s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-696n4" [3bef3233-3c67-4c96-91cb-06dc4fb264db] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007074876s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-062000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-062000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-062000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000: exit status 2 (296.473726ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000: exit status 2 (273.237822ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-062000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p default-k8s-diff-port-062000 --alsologtostderr -v=1: (1.271877338s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-062000 -n default-k8s-diff-port-062000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.34s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-976208 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-976208 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.720637399s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.72s)

TestNetworkPlugins/group/auto/Start (82.02s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.018147993s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.02s)

TestStartStop/group/newest-cni/serial/Stop (8.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-976208 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-976208 --alsologtostderr -v=3: (8.054990255s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976208 -n newest-cni-976208
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976208 -n newest-cni-976208: exit status 7 (80.592664ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-976208 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (45.55s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-976208 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-976208 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (45.190611899s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-976208 -n newest-cni-976208
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.55s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nf56t" [d7292376-7d65-4528-b917-5332e61ea4df] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nf56t" [d7292376-7d65-4528-b917-5332e61ea4df] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004556463s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nf56t" [d7292376-7d65-4528-b917-5332e61ea4df] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004919706s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-905132 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-905132 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-905132 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-905132 --alsologtostderr -v=1: (1.000915765s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905132 -n no-preload-905132
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905132 -n no-preload-905132: exit status 2 (289.900038ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905132 -n no-preload-905132
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905132 -n no-preload-905132: exit status 2 (281.588305ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-905132 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-905132 -n no-preload-905132
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-905132 -n no-preload-905132
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-976208 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestNetworkPlugins/group/kindnet/Start (94.59s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m34.59444558s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.59s)

TestStartStop/group/newest-cni/serial/Pause (3.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-976208 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-976208 --alsologtostderr -v=1: (1.390930901s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976208 -n newest-cni-976208
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976208 -n newest-cni-976208: exit status 2 (337.718528ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976208 -n newest-cni-976208
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976208 -n newest-cni-976208: exit status 2 (333.728946ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-976208 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-976208 --alsologtostderr -v=1: (1.065900046s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-976208 -n newest-cni-976208
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-976208 -n newest-cni-976208
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.94s)

TestNetworkPlugins/group/calico/Start (103.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m43.974760793s)
--- PASS: TestNetworkPlugins/group/calico/Start (103.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158674 -n embed-certs-158674: exit status 7 (77.573108ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-158674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (80.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-158674 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-158674 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m20.683632595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-158674 -n embed-certs-158674
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (80.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-880673 "pgrep -a kubelet"
I1014 20:15:13.405868  368634 config.go:182] Loaded profile config "auto-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (12.06s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-880673 replace --force -f testdata/netcat-deployment.yaml
E1014 20:15:14.407895  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.414384  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.425900  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.447412  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.488993  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.570516  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:14.732208  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:15.054214  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:149: (dbg) Done: kubectl --context auto-880673 replace --force -f testdata/netcat-deployment.yaml: (1.688876959s)
I1014 20:15:15.417340  368634 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1014 20:15:15.421567  368634 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kpvkm" [1143d551-4b96-4d3a-b449-2fd7379daaf3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 20:15:15.695960  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:16.978228  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:19.539820  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kpvkm" [1143d551-4b96-4d3a-b449-2fd7379daaf3] Running
E1014 20:15:24.661947  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005068168s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.06s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (76.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 20:15:54.972242  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:54.978743  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:54.990242  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.011788  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.053358  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.134878  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.296866  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.385416  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:55.618962  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:56.260924  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:15:57.543224  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:00.105351  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:05.226886  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:05.862775  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:15.469055  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m16.832137771s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.83s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6vx86" [bef275d4-9e61-411a-9d71-9e177c37bf86] Running
E1014 20:16:22.790957  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005279374s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
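The "waiting 10m0s for pods matching app=kindnet" step is a poll against the API server until a pod carrying the label reports Running. A minimal client-go sketch of that shape; the kubeconfig path, the 2s interval, and treating API errors as retryable are illustrative assumptions, not the helper's actual code:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, up to the 10m budget the log mentions, until one
	// pod matching the selector is Running.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "app=kindnet"})
			if err != nil {
				return false, nil // treat API hiccups as retryable
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("app=kindnet is Running")
}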

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-880673 "pgrep -a kubelet"
I1014 20:16:26.689388  368634 config.go:182] Loaded profile config "kindnet-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)
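KubeletFlags shells into the node and captures the live kubelet command line with pgrep -a kubelet. A sketch of the same check driven from Go; the asserted flag is an illustrative assumption, not necessarily what net_test.go actually greps for:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows: ask the node for the running kubelet
	// process and its full argument list.
	out, err := exec.Command("out/minikube-linux-amd64",
		"ssh", "-p", "kindnet-880673", "pgrep -a kubelet").CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// Illustrative assertion: with --container-runtime=crio the kubelet
	// should be pointed at a CRI socket via this flag.
	if !strings.Contains(string(out), "--container-runtime-endpoint") {
		panic("kubelet flags missing expected runtime endpoint")
	}
	fmt.Print(string(out))
}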

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-880673 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hv7hd" [67d6529d-4179-4bad-b254-5a822dc924e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hv7hd" [67d6529d-4179-4bad-b254-5a822dc924e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004916753s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.25s)
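NetCatPod (re)deploys its probe with kubectl replace --force, which deletes any object left from a previous run before recreating it, so the subsequent wait always starts from a fresh Deployment. The same invocation driven from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Context and manifest path are copied from the logged command.
	cmd := exec.Command("kubectl", "--context", "kindnet-880673",
		"replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Println("netcat deployment recreated")
}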

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hq4f2" [59482a16-eb52-4260-9e7e-c423e9706a69] Running
E1014 20:16:35.950954  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:36.347189  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004126661s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)
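DNS, Localhost and HairPin are three connectivity probes run inside the netcat pod: nslookup kubernetes.default exercises cluster DNS through the CNI, nc against localhost checks loopback inside the pod, and nc against the netcat service name checks hairpin NAT, i.e. a pod reaching itself through its own Service VIP. A sketch that replays the three logged commands in order (only the loop wrapper is new):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, p := range probes {
		args := append([]string{"--context", "kindnet-880673"}, p...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
	}
	fmt.Println("DNS, localhost and hairpin probes all passed")
}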

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-880673 "pgrep -a kubelet"
I1014 20:16:41.509515  368634 config.go:182] Loaded profile config "calico-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (31.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-880673 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lhs5g" [fd3ce275-1bad-4b4e-9ad1-8f34fac1170d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 20:16:42.188088  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.194584  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.206799  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.228276  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.269794  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.351388  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.512936  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:42.834532  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:43.476478  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:44.758118  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:47.320184  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:16:52.442479  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lhs5g" [fd3ce275-1bad-4b4e-9ad1-8f34fac1170d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 31.00496461s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (31.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (86.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.948016969s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.95s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-880673 "pgrep -a kubelet"
I1014 20:17:00.807762  368634 config.go:182] Loaded profile config "custom-flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-880673 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nkhf5" [24bcecc6-e482-4e3f-97c7-50ef1cc53136] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 20:17:02.684103  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nkhf5" [24bcecc6-e482-4e3f-97c7-50ef1cc53136] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006049336s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (78.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m18.889085574s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.89s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (104.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1014 20:17:58.269470  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:18:04.128023  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-880673 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m44.826133842s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.83s)
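The Start subtests in this group differ only in how the CNI is selected: a user-supplied manifest (--cni=testdata/kube-flannel.yaml), a built-in plugin (--cni=flannel, --cni=bridge), or the legacy bridge path via --enable-default-cni=true. A table-driven sketch of that matrix with every flag copied from the logged commands; running them sequentially in one process is an assumption, since the suite drives these profiles in parallel:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	variants := map[string]string{
		"custom-flannel-880673":     "--cni=testdata/kube-flannel.yaml",
		"flannel-880673":            "--cni=flannel",
		"bridge-880673":             "--cni=bridge",
		"enable-default-cni-880673": "--enable-default-cni=true",
	}
	for profile, cniFlag := range variants {
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", profile,
			"--memory=3072", "--alsologtostderr", "--wait=true", "--wait-timeout=15m",
			cniFlag, "--driver=kvm2", "--container-runtime=crio",
			"--auto-update-drivers=false")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s: %v: %s", profile, err, out))
		}
		fmt.Println(profile, "started")
	}
}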

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-880673 "pgrep -a kubelet"
I1014 20:18:23.816824  368634 config.go:182] Loaded profile config "enable-default-cni-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-880673 replace --force -f testdata/netcat-deployment.yaml
I1014 20:18:24.255867  368634 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mxrq6" [4c23b13a-4503-4a8a-9315-4157484e1e02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 20:18:26.583655  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mxrq6" [4c23b13a-4503-4a8a-9315-4157484e1e02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005921694s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.46s)
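The kapi.go:136 line above gates pod polling on Deployment stability: the controller must have observed the latest spec (generation vs. status.observedGeneration) and the replica count must have caught up (spec.replicas vs. status.replicas). A minimal client-go sketch of that check; the kubeconfig path and the default of 1 for an unset spec.replicas are assumptions:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("default").Get(context.Background(),
		"netcat", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	specReplicas := int32(1) // Kubernetes defaults an unset spec.replicas to 1
	if dep.Spec.Replicas != nil {
		specReplicas = *dep.Spec.Replicas
	}
	stable := dep.Status.ObservedGeneration >= dep.Generation &&
		dep.Status.Replicas == specReplicas
	fmt.Println("netcat stable:", stable)
}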

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-t4289" [81da8ebc-c60f-4111-bdc1-0827e4e5a098] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004848099s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-880673 "pgrep -a kubelet"
I1014 20:18:54.962848  368634 config.go:182] Loaded profile config "flannel-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-880673 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-khspn" [39e507e1-fbee-4672-b38b-3a78942c4b2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-khspn" [39e507e1-fbee-4672-b38b-3a78942c4b2b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004364513s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-880673 "pgrep -a kubelet"
I1014 20:19:17.987664  368634 config.go:182] Loaded profile config "bridge-880673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-880673 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6tb6h" [a3f7dad8-02a0-4a8d-b8d2-03c21fb4b674] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6tb6h" [a3f7dad8-02a0-4a8d-b8d2-03c21fb4b674] Running
E1014 20:19:26.050157  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004990696s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-880673 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-880673 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E1014 20:20:14.407812  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.097249  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.103735  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.115241  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.136786  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.178355  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.259839  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.421875  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:15.743709  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:16.385888  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:17.667483  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:20.228851  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:25.351075  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:35.592973  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:42.111453  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:54.972044  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:20:56.074702  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.451336  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.457828  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.469253  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.490647  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.532183  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.613730  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:20.775553  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:21.097524  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:21.739605  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:22.676888  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/default-k8s-diff-port-062000/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:22.790589  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/functional-416610/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:23.021890  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:25.583778  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:30.705391  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.269828  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.276230  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.287609  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.309106  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.350539  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.432062  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.593664  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:35.915347  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:36.557465  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:37.036298  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:37.839109  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:40.401058  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:40.947251  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:42.187744  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:45.522996  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:21:55.765054  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.051537  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.058007  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.069456  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.090969  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.132455  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.213982  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.375634  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.429169  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:01.697914  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:02.339641  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:03.621479  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:06.182975  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:09.892465  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/no-preload-905132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:11.305264  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:16.247111  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:21.547196  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:42.029523  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:42.390493  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:57.208579  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:22:58.957778  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:22.991847  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.229088  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.235565  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.247074  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.268538  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.310023  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.391539  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.553437  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:24.874781  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:25.516535  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:26.583039  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:26.798691  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:29.360403  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:34.482714  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:44.724588  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.746889  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.753376  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.764757  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.786165  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.828376  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:48.909912  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:49.071544  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:49.393363  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:50.034799  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:51.317078  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:53.879150  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:23:59.000557  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:04.311936  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kindnet-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:05.206658  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:09.242569  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.230250  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.236741  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.248171  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.269623  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.311104  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.392583  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.554199  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:18.875565  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:19.130206  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/calico-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:19.517028  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:20.799099  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:23.360553  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:28.482670  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:29.724714  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:38.724070  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:44.913777  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/custom-flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:46.168962  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/enable-default-cni-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:49.666913  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/addons-082251/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:24:59.205578  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/bridge-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:25:10.686516  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/flannel-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:25:14.408396  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/old-k8s-version-652371/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1014 20:25:15.097349  368634 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/auto-880673/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
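The repeated "Loading client cert failed" lines above are stale-kubeconfig noise rather than a test failure: the shared kubeconfig still references client certificates under .minikube/profiles/ for profiles (flannel-880673, bridge-880673, and others) that earlier cleanup steps already deleted, so the client-cert rotation watcher keeps retrying files that no longer exist. A minimal cleanup sketch for a similar environment (the profile/context names are taken from this log and are illustrative; delete-user requires a recent kubectl):

	kubectl config get-contexts                    # list contexts left over from deleted profiles
	kubectl config delete-context flannel-880673   # remove one stale context
	kubectl config delete-user flannel-880673      # remove its user entry, which holds the dead client.crt path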

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-158674 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/Pause (2.69s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-158674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674: exit status 2 (261.073589ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158674 -n embed-certs-158674: exit status 2 (264.857541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-158674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674 -n embed-certs-158674
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158674 -n embed-certs-158674
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.69s)
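The exit status 2 from the two status probes above is tolerated by the test ("may be ok"): while the cluster is paused, minikube status reports Paused/Stopped and signals the degraded state through a non-zero exit code. A minimal sketch of the same pause/verify/unpause cycle, assuming a built out/minikube-linux-amd64 and a running embed-certs-158674 profile:

	out/minikube-linux-amd64 pause -p embed-certs-158674
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674   # prints Paused, exits non-zero
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-158674     # prints Stopped, exits non-zero
	out/minikube-linux-amd64 unpause -p embed-certs-158674
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-158674   # prints Running once unpaused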

Test skip (40/324)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.34
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
271 TestStartStop/group/disable-driver-mounts 0.18
278 TestNetworkPlugins/group/kubenet 3.33
286 TestNetworkPlugins/group/cilium 3.58
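All 40 skips are environment gates rather than failures: this job runs the KVM driver with the crio runtime on linux/amd64, so docker-/podman-only, darwin-/windows-only, and docker-runtime-only tests bow out early via t.Skip; the per-test entries below record each reason with its file:line. A sketch for re-running one gated test from a minikube source checkout to see its skip reason locally (the exact flags depend on the suite configuration; this invocation is illustrative):

	go test ./test/integration -run 'TestDockerFlags' -v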

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-082251 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-358373" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-358373
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (3.33s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-880673 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-880673

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-880673

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/hosts:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/resolv.conf:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-880673

>>> host: crictl pods:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: crictl containers:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> k8s: describe netcat deployment:
error: context "kubenet-880673" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-880673" does not exist

>>> k8s: netcat logs:
error: context "kubenet-880673" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-880673" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-880673" does not exist

>>> k8s: coredns logs:
error: context "kubenet-880673" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-880673" does not exist

>>> k8s: api server logs:
error: context "kubenet-880673" does not exist

>>> host: /etc/cni:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: ip a s:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: ip r s:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: iptables-save:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: iptables table nat:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-880673" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-880673" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-880673" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: kubelet daemon config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> k8s: kubelet logs:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.247:8443
  name: kubernetes-upgrade-425560
contexts:
- context:
    cluster: kubernetes-upgrade-425560
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-425560
  name: kubernetes-upgrade-425560
current-context: kubernetes-upgrade-425560
kind: Config
users:
- name: kubernetes-upgrade-425560
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.crt
    client-key: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.key
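The kubectl config dump shows why every lookup in this debug section fails: the only surviving context is kubernetes-upgrade-425560, and nothing named kubenet-880673 was ever created. Assuming that cluster were still running, the remaining context could be exercised directly, e.g.:

	kubectl --context kubernetes-upgrade-425560 get nodes
	kubectl --context kubernetes-upgrade-425560 get pods -n default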

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-880673

>>> host: docker daemon status:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: docker daemon config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: docker system info:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: cri-docker daemon status:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: cri-docker daemon config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: cri-dockerd version:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: containerd daemon status:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: containerd daemon config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: containerd config dump:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: crio daemon status:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: crio daemon config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: /etc/crio:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

>>> host: crio config:
* Profile "kubenet-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-880673"

----------------------- debugLogs end: kubenet-880673 [took: 3.1695844s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-880673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-880673
--- SKIP: TestNetworkPlugins/group/kubenet (3.33s)
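These debugLogs are expected to come back empty: net_test.go:93 skips the kubenet group before any cluster is started, so neither a minikube profile nor a kubeconfig context named kubenet-880673 ever exists. A quick confirmation sketch for a similar environment:

	out/minikube-linux-amd64 profile list            # kubenet-880673 is absent
	kubectl config get-contexts | grep kubenet       # no match; grep exits 1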

TestNetworkPlugins/group/cilium (3.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-880673 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-880673

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-880673" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: ip a s:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: ip r s:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: iptables-save:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: iptables table nat:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-880673

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-880673

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-880673" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-880673" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-880673

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-880673

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-880673" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-880673" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-880673" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-880673" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-880673" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: kubelet daemon config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> k8s: kubelet logs:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-364627/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.83.247:8443
  name: kubernetes-upgrade-425560
contexts:
- context:
    cluster: kubernetes-upgrade-425560
    extensions:
    - extension:
        last-update: Tue, 14 Oct 2025 20:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-425560
  name: kubernetes-upgrade-425560
current-context: kubernetes-upgrade-425560
kind: Config
users:
- name: kubernetes-upgrade-425560
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.crt
    client-key: /home/jenkins/minikube-integration/21409-364627/.minikube/profiles/kubernetes-upgrade-425560/client.key
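
The dumped config explains every "context does not exist" and "Profile not found" message above: the only context on this host is kubernetes-upgrade-425560, apparently left by a parallel upgrade test, while all of the debug commands target cilium-880673, a profile that was never created because the test was skipped. A minimal way to check which contexts a kubeconfig actually holds (assuming kubectl is on PATH and reads the same kubeconfig):

	kubectl config get-contexts -o name   # list context names only
	kubectl config current-context        # prints kubernetes-upgrade-425560 here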

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-880673

>>> host: docker daemon status:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: docker daemon config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: docker system info:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: cri-docker daemon status:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: cri-docker daemon config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: cri-dockerd version:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: containerd daemon status:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: containerd daemon config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: containerd config dump:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: crio daemon status:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: crio daemon config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: /etc/crio:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

>>> host: crio config:
* Profile "cilium-880673" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-880673"

----------------------- debugLogs end: cilium-880673 [took: 3.402600349s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-880673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-880673
--- SKIP: TestNetworkPlugins/group/cilium (3.58s)
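
To recreate the skipped profile locally and gather these debug logs against a live cluster, a sketch along these lines should work; the kvm2/crio flags mirror this job's configuration, and the k8s-app=cilium pod label is the upstream Cilium default rather than something this log confirms:

	minikube start -p cilium-880673 --driver=kvm2 --container-runtime=crio --cni=cilium
	kubectl --context cilium-880673 -n kube-system get pods -l k8s-app=cilium
	minikube delete -p cilium-880673   # clean up afterwards, as helpers_test.go does above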