Test Report: KVM_Linux_crio 21683

1b58c48826b6fb4d6f7297e87780eae465bc5f37:2025-10-19:41984

Tests failed (4/324)

Order  Failed test                                          Duration (s)
37     TestAddons/parallel/Ingress                          159.33
131    TestFunctional/parallel/ImageCommands/ImageRemove    2.95
244    TestPreload                                          137.15
256    TestPause/serial/SecondStartNoReconfiguration        58.73
TestAddons/parallel/Ingress (159.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-305823 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-305823 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-305823 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7b84d6d2-a870-4484-b316-6000b51924a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7b84d6d2-a870-4484-b316-6000b51924a2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004043835s
I1019 16:25:30.012318  278280 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-305823 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.087552s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-305823 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-305823 -n addons-305823
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 logs -n 25: (1.31119165s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                                ARGS                                                                                                                                                                                                                                                │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-014353                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ download-only-014353 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │ 19 Oct 25 16:21 UTC │
	│ start   │ --download-only -p binary-mirror-861037 --alsologtostderr --binary-mirror http://127.0.0.1:46113 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                                                                                                                                                                                                                                                               │ binary-mirror-861037 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │                     │
	│ delete  │ -p binary-mirror-861037                                                                                                                                                                                                                                                                                                                                                                                                                                                                            │ binary-mirror-861037 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │ 19 Oct 25 16:21 UTC │
	│ addons  │ disable dashboard -p addons-305823                                                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │                     │
	│ addons  │ enable dashboard -p addons-305823                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │                     │
	│ start   │ -p addons-305823 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-305823 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:24 UTC │ 19 Oct 25 16:24 UTC │
	│ addons  │ addons-305823 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-305823                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ ssh     │ addons-305823 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │                     │
	│ ip      │ addons-305823 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ ssh     │ addons-305823 ssh cat /opt/local-path-provisioner/pvc-9e7ebf85-26d7-46d3-bf9a-511475c7798b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                                                  │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ enable headlamp -p addons-305823 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:25 UTC │ 19 Oct 25 16:25 UTC │
	│ addons  │ addons-305823 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:26 UTC │ 19 Oct 25 16:26 UTC │
	│ addons  │ addons-305823 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:26 UTC │ 19 Oct 25 16:26 UTC │
	│ addons  │ addons-305823 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:26 UTC │ 19 Oct 25 16:26 UTC │
	│ ip      │ addons-305823 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-305823        │ jenkins │ v1.37.0 │ 19 Oct 25 16:27 UTC │ 19 Oct 25 16:27 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:21:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:21:43.172436  278987 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:21:43.172684  278987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:43.172692  278987 out.go:374] Setting ErrFile to fd 2...
	I1019 16:21:43.172696  278987 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:43.172910  278987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:21:43.173446  278987 out.go:368] Setting JSON to false
	I1019 16:21:43.174273  278987 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7445,"bootTime":1760883458,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:21:43.174371  278987 start.go:143] virtualization: kvm guest
	I1019 16:21:43.176070  278987 out.go:179] * [addons-305823] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:21:43.177357  278987 notify.go:221] Checking for updates...
	I1019 16:21:43.177368  278987 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:21:43.178466  278987 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:21:43.179687  278987 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:21:43.181203  278987 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:21:43.182229  278987 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:21:43.183181  278987 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:21:43.184291  278987 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:21:43.214680  278987 out.go:179] * Using the kvm2 driver based on user configuration
	I1019 16:21:43.215736  278987 start.go:309] selected driver: kvm2
	I1019 16:21:43.215754  278987 start.go:930] validating driver "kvm2" against <nil>
	I1019 16:21:43.215765  278987 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:21:43.216468  278987 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:43.216564  278987 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:43.230459  278987 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:43.230489  278987 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:43.244276  278987 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:43.244335  278987 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:21:43.244610  278987 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:21:43.244643  278987 cni.go:84] Creating CNI manager for ""
	I1019 16:21:43.244686  278987 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 16:21:43.244695  278987 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 16:21:43.244742  278987 start.go:353] cluster config:
	{Name:addons-305823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-305823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1019 16:21:43.244830  278987 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:43.246449  278987 out.go:179] * Starting "addons-305823" primary control-plane node in "addons-305823" cluster
	I1019 16:21:43.247497  278987 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:43.247541  278987 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 16:21:43.247551  278987 cache.go:59] Caching tarball of preloaded images
	I1019 16:21:43.247622  278987 preload.go:233] Found /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 16:21:43.247632  278987 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 16:21:43.247923  278987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/config.json ...
	I1019 16:21:43.247942  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/config.json: {Name:mk29b9c2e7643fef741c4e5f1fd2df154e228ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:43.248103  278987 start.go:360] acquireMachinesLock for addons-305823: {Name:mk3b19946e20646ec6cf08c56ebb92a1f48fa1bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 16:21:43.248606  278987 start.go:364] duration metric: took 485.783µs to acquireMachinesLock for "addons-305823"
	I1019 16:21:43.248630  278987 start.go:93] Provisioning new machine with config: &{Name:addons-305823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-305823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:21:43.248689  278987 start.go:125] createHost starting for "" (driver="kvm2")
	I1019 16:21:43.250049  278987 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1019 16:21:43.250183  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:21:43.250227  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:21:43.263842  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:40421
	I1019 16:21:43.264399  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:21:43.264957  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:21:43.264993  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:21:43.265420  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:21:43.265625  278987 main.go:143] libmachine: (addons-305823) Calling .GetMachineName
	I1019 16:21:43.265808  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:21:43.265969  278987 start.go:159] libmachine.API.Create for "addons-305823" (driver="kvm2")
	I1019 16:21:43.266013  278987 client.go:171] LocalClient.Create starting
	I1019 16:21:43.266065  278987 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem
	I1019 16:21:43.602315  278987 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem
	I1019 16:21:43.717002  278987 main.go:143] libmachine: Running pre-create checks...
	I1019 16:21:43.717027  278987 main.go:143] libmachine: (addons-305823) Calling .PreCreateCheck
	I1019 16:21:43.717695  278987 main.go:143] libmachine: (addons-305823) Calling .GetConfigRaw
	I1019 16:21:43.718201  278987 main.go:143] libmachine: Creating machine...
	I1019 16:21:43.718224  278987 main.go:143] libmachine: (addons-305823) Calling .Create
	I1019 16:21:43.718417  278987 main.go:143] libmachine: (addons-305823) creating domain...
	I1019 16:21:43.718440  278987 main.go:143] libmachine: (addons-305823) creating network...
	I1019 16:21:43.720051  278987 main.go:143] libmachine: (addons-305823) DBG | found existing default network
	I1019 16:21:43.720270  278987 main.go:143] libmachine: (addons-305823) DBG | <network>
	I1019 16:21:43.720296  278987 main.go:143] libmachine: (addons-305823) DBG |   <name>default</name>
	I1019 16:21:43.720336  278987 main.go:143] libmachine: (addons-305823) DBG |   <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	I1019 16:21:43.720373  278987 main.go:143] libmachine: (addons-305823) DBG |   <forward mode='nat'>
	I1019 16:21:43.720380  278987 main.go:143] libmachine: (addons-305823) DBG |     <nat>
	I1019 16:21:43.720387  278987 main.go:143] libmachine: (addons-305823) DBG |       <port start='1024' end='65535'/>
	I1019 16:21:43.720473  278987 main.go:143] libmachine: (addons-305823) DBG |     </nat>
	I1019 16:21:43.720532  278987 main.go:143] libmachine: (addons-305823) DBG |   </forward>
	I1019 16:21:43.720551  278987 main.go:143] libmachine: (addons-305823) DBG |   <bridge name='virbr0' stp='on' delay='0'/>
	I1019 16:21:43.720568  278987 main.go:143] libmachine: (addons-305823) DBG |   <mac address='52:54:00:10:a2:1d'/>
	I1019 16:21:43.720579  278987 main.go:143] libmachine: (addons-305823) DBG |   <ip address='192.168.122.1' netmask='255.255.255.0'>
	I1019 16:21:43.720594  278987 main.go:143] libmachine: (addons-305823) DBG |     <dhcp>
	I1019 16:21:43.720605  278987 main.go:143] libmachine: (addons-305823) DBG |       <range start='192.168.122.2' end='192.168.122.254'/>
	I1019 16:21:43.720632  278987 main.go:143] libmachine: (addons-305823) DBG |     </dhcp>
	I1019 16:21:43.720646  278987 main.go:143] libmachine: (addons-305823) DBG |   </ip>
	I1019 16:21:43.720653  278987 main.go:143] libmachine: (addons-305823) DBG | </network>
	I1019 16:21:43.720672  278987 main.go:143] libmachine: (addons-305823) DBG | 
	I1019 16:21:43.720937  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:43.720780  279015 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000136b0}
	I1019 16:21:43.720966  278987 main.go:143] libmachine: (addons-305823) DBG | defining private network:
	I1019 16:21:43.720977  278987 main.go:143] libmachine: (addons-305823) DBG | 
	I1019 16:21:43.720997  278987 main.go:143] libmachine: (addons-305823) DBG | <network>
	I1019 16:21:43.721007  278987 main.go:143] libmachine: (addons-305823) DBG |   <name>mk-addons-305823</name>
	I1019 16:21:43.721019  278987 main.go:143] libmachine: (addons-305823) DBG |   <dns enable='no'/>
	I1019 16:21:43.721030  278987 main.go:143] libmachine: (addons-305823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1019 16:21:43.721037  278987 main.go:143] libmachine: (addons-305823) DBG |     <dhcp>
	I1019 16:21:43.721056  278987 main.go:143] libmachine: (addons-305823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1019 16:21:43.721066  278987 main.go:143] libmachine: (addons-305823) DBG |     </dhcp>
	I1019 16:21:43.721092  278987 main.go:143] libmachine: (addons-305823) DBG |   </ip>
	I1019 16:21:43.721113  278987 main.go:143] libmachine: (addons-305823) DBG | </network>
	I1019 16:21:43.721125  278987 main.go:143] libmachine: (addons-305823) DBG | 
	I1019 16:21:43.727169  278987 main.go:143] libmachine: (addons-305823) DBG | creating private network mk-addons-305823 192.168.39.0/24...
	I1019 16:21:43.795692  278987 main.go:143] libmachine: (addons-305823) DBG | private network mk-addons-305823 192.168.39.0/24 created
	I1019 16:21:43.795996  278987 main.go:143] libmachine: (addons-305823) DBG | <network>
	I1019 16:21:43.796039  278987 main.go:143] libmachine: (addons-305823) setting up store path in /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823 ...
	I1019 16:21:43.796056  278987 main.go:143] libmachine: (addons-305823) DBG |   <name>mk-addons-305823</name>
	I1019 16:21:43.796069  278987 main.go:143] libmachine: (addons-305823) DBG |   <uuid>28f88028-fc41-4e54-97ed-0371b9f47833</uuid>
	I1019 16:21:43.796078  278987 main.go:143] libmachine: (addons-305823) DBG |   <bridge name='virbr1' stp='on' delay='0'/>
	I1019 16:21:43.796095  278987 main.go:143] libmachine: (addons-305823) building disk image from file:///home/jenkins/minikube-integration/21683-274250/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1019 16:21:43.796107  278987 main.go:143] libmachine: (addons-305823) DBG |   <mac address='52:54:00:c8:d4:8b'/>
	I1019 16:21:43.796118  278987 main.go:143] libmachine: (addons-305823) DBG |   <dns enable='no'/>
	I1019 16:21:43.796130  278987 main.go:143] libmachine: (addons-305823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1019 16:21:43.796142  278987 main.go:143] libmachine: (addons-305823) DBG |     <dhcp>
	I1019 16:21:43.796154  278987 main.go:143] libmachine: (addons-305823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1019 16:21:43.796164  278987 main.go:143] libmachine: (addons-305823) DBG |     </dhcp>
	I1019 16:21:43.796173  278987 main.go:143] libmachine: (addons-305823) DBG |   </ip>
	I1019 16:21:43.796177  278987 main.go:143] libmachine: (addons-305823) DBG | </network>
	I1019 16:21:43.796184  278987 main.go:143] libmachine: (addons-305823) DBG | 
	I1019 16:21:43.796201  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:43.796022  279015 common.go:150] Making disk image using store path: /home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:21:43.796258  278987 main.go:143] libmachine: (addons-305823) Downloading /home/jenkins/minikube-integration/21683-274250/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21683-274250/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1019 16:21:44.099256  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:44.099118  279015 common.go:157] Creating ssh key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa...
	I1019 16:21:44.293701  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:44.293561  279015 common.go:163] Creating raw disk image: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/addons-305823.rawdisk...
	I1019 16:21:44.293734  278987 main.go:143] libmachine: (addons-305823) DBG | Writing magic tar header
	I1019 16:21:44.293748  278987 main.go:143] libmachine: (addons-305823) DBG | Writing SSH key tar header
	I1019 16:21:44.293756  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:44.293719  279015 common.go:177] Fixing permissions on /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823 ...
	I1019 16:21:44.293854  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823
	I1019 16:21:44.293874  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-274250/.minikube/machines
	I1019 16:21:44.293884  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823 (perms=drwx------)
	I1019 16:21:44.293892  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:21:44.293926  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins/minikube-integration/21683-274250/.minikube/machines (perms=drwxr-xr-x)
	I1019 16:21:44.293961  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins/minikube-integration/21683-274250
	I1019 16:21:44.293995  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins/minikube-integration/21683-274250/.minikube (perms=drwxr-xr-x)
	I1019 16:21:44.294026  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1019 16:21:44.294059  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins/minikube-integration/21683-274250 (perms=drwxrwxr-x)
	I1019 16:21:44.294071  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home/jenkins
	I1019 16:21:44.294085  278987 main.go:143] libmachine: (addons-305823) DBG | checking permissions on dir: /home
	I1019 16:21:44.294096  278987 main.go:143] libmachine: (addons-305823) DBG | skipping /home - not owner
	I1019 16:21:44.294111  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1019 16:21:44.294124  278987 main.go:143] libmachine: (addons-305823) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1019 16:21:44.294135  278987 main.go:143] libmachine: (addons-305823) defining domain...
	I1019 16:21:44.295263  278987 main.go:143] libmachine: (addons-305823) defining domain using XML: 
	I1019 16:21:44.295285  278987 main.go:143] libmachine: (addons-305823) <domain type='kvm'>
	I1019 16:21:44.295297  278987 main.go:143] libmachine: (addons-305823)   <name>addons-305823</name>
	I1019 16:21:44.295311  278987 main.go:143] libmachine: (addons-305823)   <memory unit='MiB'>4096</memory>
	I1019 16:21:44.295320  278987 main.go:143] libmachine: (addons-305823)   <vcpu>2</vcpu>
	I1019 16:21:44.295327  278987 main.go:143] libmachine: (addons-305823)   <features>
	I1019 16:21:44.295335  278987 main.go:143] libmachine: (addons-305823)     <acpi/>
	I1019 16:21:44.295342  278987 main.go:143] libmachine: (addons-305823)     <apic/>
	I1019 16:21:44.295349  278987 main.go:143] libmachine: (addons-305823)     <pae/>
	I1019 16:21:44.295358  278987 main.go:143] libmachine: (addons-305823)   </features>
	I1019 16:21:44.295367  278987 main.go:143] libmachine: (addons-305823)   <cpu mode='host-passthrough'>
	I1019 16:21:44.295391  278987 main.go:143] libmachine: (addons-305823)   </cpu>
	I1019 16:21:44.295401  278987 main.go:143] libmachine: (addons-305823)   <os>
	I1019 16:21:44.295409  278987 main.go:143] libmachine: (addons-305823)     <type>hvm</type>
	I1019 16:21:44.295416  278987 main.go:143] libmachine: (addons-305823)     <boot dev='cdrom'/>
	I1019 16:21:44.295425  278987 main.go:143] libmachine: (addons-305823)     <boot dev='hd'/>
	I1019 16:21:44.295433  278987 main.go:143] libmachine: (addons-305823)     <bootmenu enable='no'/>
	I1019 16:21:44.295440  278987 main.go:143] libmachine: (addons-305823)   </os>
	I1019 16:21:44.295447  278987 main.go:143] libmachine: (addons-305823)   <devices>
	I1019 16:21:44.295455  278987 main.go:143] libmachine: (addons-305823)     <disk type='file' device='cdrom'>
	I1019 16:21:44.295486  278987 main.go:143] libmachine: (addons-305823)       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/boot2docker.iso'/>
	I1019 16:21:44.295511  278987 main.go:143] libmachine: (addons-305823)       <target dev='hdc' bus='scsi'/>
	I1019 16:21:44.295528  278987 main.go:143] libmachine: (addons-305823)       <readonly/>
	I1019 16:21:44.295537  278987 main.go:143] libmachine: (addons-305823)     </disk>
	I1019 16:21:44.295546  278987 main.go:143] libmachine: (addons-305823)     <disk type='file' device='disk'>
	I1019 16:21:44.295560  278987 main.go:143] libmachine: (addons-305823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1019 16:21:44.295571  278987 main.go:143] libmachine: (addons-305823)       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/addons-305823.rawdisk'/>
	I1019 16:21:44.295577  278987 main.go:143] libmachine: (addons-305823)       <target dev='hda' bus='virtio'/>
	I1019 16:21:44.295584  278987 main.go:143] libmachine: (addons-305823)     </disk>
	I1019 16:21:44.295599  278987 main.go:143] libmachine: (addons-305823)     <interface type='network'>
	I1019 16:21:44.295606  278987 main.go:143] libmachine: (addons-305823)       <source network='mk-addons-305823'/>
	I1019 16:21:44.295611  278987 main.go:143] libmachine: (addons-305823)       <model type='virtio'/>
	I1019 16:21:44.295618  278987 main.go:143] libmachine: (addons-305823)     </interface>
	I1019 16:21:44.295623  278987 main.go:143] libmachine: (addons-305823)     <interface type='network'>
	I1019 16:21:44.295630  278987 main.go:143] libmachine: (addons-305823)       <source network='default'/>
	I1019 16:21:44.295635  278987 main.go:143] libmachine: (addons-305823)       <model type='virtio'/>
	I1019 16:21:44.295638  278987 main.go:143] libmachine: (addons-305823)     </interface>
	I1019 16:21:44.295655  278987 main.go:143] libmachine: (addons-305823)     <serial type='pty'>
	I1019 16:21:44.295672  278987 main.go:143] libmachine: (addons-305823)       <target port='0'/>
	I1019 16:21:44.295683  278987 main.go:143] libmachine: (addons-305823)     </serial>
	I1019 16:21:44.295695  278987 main.go:143] libmachine: (addons-305823)     <console type='pty'>
	I1019 16:21:44.295707  278987 main.go:143] libmachine: (addons-305823)       <target type='serial' port='0'/>
	I1019 16:21:44.295716  278987 main.go:143] libmachine: (addons-305823)     </console>
	I1019 16:21:44.295725  278987 main.go:143] libmachine: (addons-305823)     <rng model='virtio'>
	I1019 16:21:44.295736  278987 main.go:143] libmachine: (addons-305823)       <backend model='random'>/dev/random</backend>
	I1019 16:21:44.295752  278987 main.go:143] libmachine: (addons-305823)     </rng>
	I1019 16:21:44.295770  278987 main.go:143] libmachine: (addons-305823)   </devices>
	I1019 16:21:44.295787  278987 main.go:143] libmachine: (addons-305823) </domain>
	I1019 16:21:44.295798  278987 main.go:143] libmachine: (addons-305823) 
	I1019 16:21:44.300710  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:4c:a9:4e in network default
	I1019 16:21:44.301500  278987 main.go:143] libmachine: (addons-305823) starting domain...
	I1019 16:21:44.301519  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:44.301525  278987 main.go:143] libmachine: (addons-305823) ensuring networks are active...
	I1019 16:21:44.302204  278987 main.go:143] libmachine: (addons-305823) Ensuring network default is active
	I1019 16:21:44.302549  278987 main.go:143] libmachine: (addons-305823) Ensuring network mk-addons-305823 is active
	I1019 16:21:44.303141  278987 main.go:143] libmachine: (addons-305823) getting domain XML...
	I1019 16:21:44.304155  278987 main.go:143] libmachine: (addons-305823) DBG | starting domain XML:
	I1019 16:21:44.304179  278987 main.go:143] libmachine: (addons-305823) DBG | <domain type='kvm'>
	I1019 16:21:44.304188  278987 main.go:143] libmachine: (addons-305823) DBG |   <name>addons-305823</name>
	I1019 16:21:44.304201  278987 main.go:143] libmachine: (addons-305823) DBG |   <uuid>f263ad2e-c691-4931-9046-9032f0718877</uuid>
	I1019 16:21:44.304228  278987 main.go:143] libmachine: (addons-305823) DBG |   <memory unit='KiB'>4194304</memory>
	I1019 16:21:44.304253  278987 main.go:143] libmachine: (addons-305823) DBG |   <currentMemory unit='KiB'>4194304</currentMemory>
	I1019 16:21:44.304268  278987 main.go:143] libmachine: (addons-305823) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 16:21:44.304280  278987 main.go:143] libmachine: (addons-305823) DBG |   <os>
	I1019 16:21:44.304307  278987 main.go:143] libmachine: (addons-305823) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 16:21:44.304329  278987 main.go:143] libmachine: (addons-305823) DBG |     <boot dev='cdrom'/>
	I1019 16:21:44.304341  278987 main.go:143] libmachine: (addons-305823) DBG |     <boot dev='hd'/>
	I1019 16:21:44.304350  278987 main.go:143] libmachine: (addons-305823) DBG |     <bootmenu enable='no'/>
	I1019 16:21:44.304362  278987 main.go:143] libmachine: (addons-305823) DBG |   </os>
	I1019 16:21:44.304373  278987 main.go:143] libmachine: (addons-305823) DBG |   <features>
	I1019 16:21:44.304382  278987 main.go:143] libmachine: (addons-305823) DBG |     <acpi/>
	I1019 16:21:44.304393  278987 main.go:143] libmachine: (addons-305823) DBG |     <apic/>
	I1019 16:21:44.304402  278987 main.go:143] libmachine: (addons-305823) DBG |     <pae/>
	I1019 16:21:44.304416  278987 main.go:143] libmachine: (addons-305823) DBG |   </features>
	I1019 16:21:44.304429  278987 main.go:143] libmachine: (addons-305823) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 16:21:44.304438  278987 main.go:143] libmachine: (addons-305823) DBG |   <clock offset='utc'/>
	I1019 16:21:44.304452  278987 main.go:143] libmachine: (addons-305823) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 16:21:44.304463  278987 main.go:143] libmachine: (addons-305823) DBG |   <on_reboot>restart</on_reboot>
	I1019 16:21:44.304488  278987 main.go:143] libmachine: (addons-305823) DBG |   <on_crash>destroy</on_crash>
	I1019 16:21:44.304503  278987 main.go:143] libmachine: (addons-305823) DBG |   <devices>
	I1019 16:21:44.304527  278987 main.go:143] libmachine: (addons-305823) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 16:21:44.304546  278987 main.go:143] libmachine: (addons-305823) DBG |     <disk type='file' device='cdrom'>
	I1019 16:21:44.304559  278987 main.go:143] libmachine: (addons-305823) DBG |       <driver name='qemu' type='raw'/>
	I1019 16:21:44.304573  278987 main.go:143] libmachine: (addons-305823) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/boot2docker.iso'/>
	I1019 16:21:44.304584  278987 main.go:143] libmachine: (addons-305823) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 16:21:44.304594  278987 main.go:143] libmachine: (addons-305823) DBG |       <readonly/>
	I1019 16:21:44.304607  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 16:21:44.304619  278987 main.go:143] libmachine: (addons-305823) DBG |     </disk>
	I1019 16:21:44.304635  278987 main.go:143] libmachine: (addons-305823) DBG |     <disk type='file' device='disk'>
	I1019 16:21:44.304649  278987 main.go:143] libmachine: (addons-305823) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 16:21:44.304659  278987 main.go:143] libmachine: (addons-305823) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/addons-305823.rawdisk'/>
	I1019 16:21:44.304670  278987 main.go:143] libmachine: (addons-305823) DBG |       <target dev='hda' bus='virtio'/>
	I1019 16:21:44.304686  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 16:21:44.304697  278987 main.go:143] libmachine: (addons-305823) DBG |     </disk>
	I1019 16:21:44.304707  278987 main.go:143] libmachine: (addons-305823) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 16:21:44.304720  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 16:21:44.304730  278987 main.go:143] libmachine: (addons-305823) DBG |     </controller>
	I1019 16:21:44.304740  278987 main.go:143] libmachine: (addons-305823) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 16:21:44.304746  278987 main.go:143] libmachine: (addons-305823) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 16:21:44.304773  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 16:21:44.304785  278987 main.go:143] libmachine: (addons-305823) DBG |     </controller>
	I1019 16:21:44.304795  278987 main.go:143] libmachine: (addons-305823) DBG |     <interface type='network'>
	I1019 16:21:44.304803  278987 main.go:143] libmachine: (addons-305823) DBG |       <mac address='52:54:00:48:d4:0b'/>
	I1019 16:21:44.304812  278987 main.go:143] libmachine: (addons-305823) DBG |       <source network='mk-addons-305823'/>
	I1019 16:21:44.304819  278987 main.go:143] libmachine: (addons-305823) DBG |       <model type='virtio'/>
	I1019 16:21:44.304830  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 16:21:44.304835  278987 main.go:143] libmachine: (addons-305823) DBG |     </interface>
	I1019 16:21:44.304840  278987 main.go:143] libmachine: (addons-305823) DBG |     <interface type='network'>
	I1019 16:21:44.304847  278987 main.go:143] libmachine: (addons-305823) DBG |       <mac address='52:54:00:4c:a9:4e'/>
	I1019 16:21:44.304860  278987 main.go:143] libmachine: (addons-305823) DBG |       <source network='default'/>
	I1019 16:21:44.304875  278987 main.go:143] libmachine: (addons-305823) DBG |       <model type='virtio'/>
	I1019 16:21:44.304889  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 16:21:44.304898  278987 main.go:143] libmachine: (addons-305823) DBG |     </interface>
	I1019 16:21:44.304909  278987 main.go:143] libmachine: (addons-305823) DBG |     <serial type='pty'>
	I1019 16:21:44.304920  278987 main.go:143] libmachine: (addons-305823) DBG |       <target type='isa-serial' port='0'>
	I1019 16:21:44.304928  278987 main.go:143] libmachine: (addons-305823) DBG |         <model name='isa-serial'/>
	I1019 16:21:44.304937  278987 main.go:143] libmachine: (addons-305823) DBG |       </target>
	I1019 16:21:44.304953  278987 main.go:143] libmachine: (addons-305823) DBG |     </serial>
	I1019 16:21:44.304969  278987 main.go:143] libmachine: (addons-305823) DBG |     <console type='pty'>
	I1019 16:21:44.305001  278987 main.go:143] libmachine: (addons-305823) DBG |       <target type='serial' port='0'/>
	I1019 16:21:44.305019  278987 main.go:143] libmachine: (addons-305823) DBG |     </console>
	I1019 16:21:44.305034  278987 main.go:143] libmachine: (addons-305823) DBG |     <input type='mouse' bus='ps2'/>
	I1019 16:21:44.305050  278987 main.go:143] libmachine: (addons-305823) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 16:21:44.305059  278987 main.go:143] libmachine: (addons-305823) DBG |     <audio id='1' type='none'/>
	I1019 16:21:44.305066  278987 main.go:143] libmachine: (addons-305823) DBG |     <memballoon model='virtio'>
	I1019 16:21:44.305080  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 16:21:44.305091  278987 main.go:143] libmachine: (addons-305823) DBG |     </memballoon>
	I1019 16:21:44.305101  278987 main.go:143] libmachine: (addons-305823) DBG |     <rng model='virtio'>
	I1019 16:21:44.305112  278987 main.go:143] libmachine: (addons-305823) DBG |       <backend model='random'>/dev/random</backend>
	I1019 16:21:44.305132  278987 main.go:143] libmachine: (addons-305823) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 16:21:44.305149  278987 main.go:143] libmachine: (addons-305823) DBG |     </rng>
	I1019 16:21:44.305177  278987 main.go:143] libmachine: (addons-305823) DBG |   </devices>
	I1019 16:21:44.305191  278987 main.go:143] libmachine: (addons-305823) DBG | </domain>
	I1019 16:21:44.305211  278987 main.go:143] libmachine: (addons-305823) DBG | 
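The XML dump above is the complete libvirt domain definition minikube generated before booting the VM. As a point of reference, the define-and-start step it records can be reproduced by hand; the sketch below shells out to the virsh CLI from Go (minikube itself talks to libvirt through its Go bindings rather than virsh, and the XML file path here is purely illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart registers a domain from an XML definition and boots it,
// mirroring the "define domain" / "waiting for domain to start" steps in the log.
func defineAndStart(xmlPath, domain string) error {
	// virsh -c qemu:///system define <xml> registers the domain with libvirt.
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("define failed: %v: %s", err, out)
	}
	// virsh -c qemu:///system start <name> actually boots the VM.
	if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", domain).CombinedOutput(); err != nil {
		return fmt.Errorf("start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// The XML path is hypothetical; the domain name matches the log.
	if err := defineAndStart("addons-305823.xml", "addons-305823"); err != nil {
		log.Fatal(err)
	}
}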
	I1019 16:21:45.548068  278987 main.go:143] libmachine: (addons-305823) waiting for domain to start...
	I1019 16:21:45.549466  278987 main.go:143] libmachine: (addons-305823) domain is now running
	I1019 16:21:45.549515  278987 main.go:143] libmachine: (addons-305823) waiting for IP...
	I1019 16:21:45.550241  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:45.550655  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:45.550682  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:45.550893  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:45.550964  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:45.550908  279015 retry.go:31] will retry after 236.8799ms: waiting for domain to come up
	I1019 16:21:45.789481  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:45.790048  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:45.790076  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:45.790324  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:45.790380  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:45.790319  279015 retry.go:31] will retry after 279.260154ms: waiting for domain to come up
	I1019 16:21:46.071078  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:46.071601  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:46.071630  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:46.071858  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:46.071885  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:46.071833  279015 retry.go:31] will retry after 389.540191ms: waiting for domain to come up
	I1019 16:21:46.463421  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:46.463881  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:46.463906  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:46.464151  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:46.464212  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:46.464139  279015 retry.go:31] will retry after 606.580588ms: waiting for domain to come up
	I1019 16:21:47.072201  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:47.072644  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:47.072671  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:47.072947  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:47.073002  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:47.072943  279015 retry.go:31] will retry after 494.97839ms: waiting for domain to come up
	I1019 16:21:47.569678  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:47.570220  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:47.570250  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:47.570496  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:47.570527  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:47.570482  279015 retry.go:31] will retry after 638.116314ms: waiting for domain to come up
	I1019 16:21:48.210093  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:48.210519  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:48.210546  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:48.210782  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:48.210805  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:48.210759  279015 retry.go:31] will retry after 1.086104824s: waiting for domain to come up
	I1019 16:21:49.298435  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:49.299061  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:49.299083  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:49.299341  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:49.299422  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:49.299344  279015 retry.go:31] will retry after 1.12474459s: waiting for domain to come up
	I1019 16:21:50.425769  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:50.426270  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:50.426302  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:50.426589  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:50.426620  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:50.426583  279015 retry.go:31] will retry after 1.539480615s: waiting for domain to come up
	I1019 16:21:51.967415  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:51.967849  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:51.967876  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:51.968178  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:51.968231  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:51.968167  279015 retry.go:31] will retry after 1.955943844s: waiting for domain to come up
	I1019 16:21:53.925703  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:53.926199  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:53.926218  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:53.926598  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:53.926633  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:53.926545  279015 retry.go:31] will retry after 2.150131908s: waiting for domain to come up
	I1019 16:21:56.080083  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:56.080502  278987 main.go:143] libmachine: (addons-305823) DBG | no network interface addresses found for domain addons-305823 (source=lease)
	I1019 16:21:56.080528  278987 main.go:143] libmachine: (addons-305823) DBG | trying to list again with source=arp
	I1019 16:21:56.080798  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find current IP address of domain addons-305823 in network mk-addons-305823 (interfaces detected: [])
	I1019 16:21:56.080829  278987 main.go:143] libmachine: (addons-305823) DBG | I1019 16:21:56.080757  279015 retry.go:31] will retry after 3.335009076s: waiting for domain to come up
	I1019 16:21:59.417623  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.418124  278987 main.go:143] libmachine: (addons-305823) found domain IP: 192.168.39.11
	I1019 16:21:59.418155  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has current primary IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.418164  278987 main.go:143] libmachine: (addons-305823) reserving static IP address...
	I1019 16:21:59.418564  278987 main.go:143] libmachine: (addons-305823) DBG | unable to find host DHCP lease matching {name: "addons-305823", mac: "52:54:00:48:d4:0b", ip: "192.168.39.11"} in network mk-addons-305823
	I1019 16:21:59.587959  278987 main.go:143] libmachine: (addons-305823) DBG | Getting to WaitForSSH function...
	I1019 16:21:59.588010  278987 main.go:143] libmachine: (addons-305823) reserved static IP address 192.168.39.11 for domain addons-305823
	I1019 16:21:59.588026  278987 main.go:143] libmachine: (addons-305823) waiting for SSH...
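The retry loop above ("waiting for IP ... will retry after ...") re-queries libvirt's DHCP leases, falling back to ARP, until the new interface reports an address, then reserves that address as a static lease. A rough standalone equivalent that polls virsh net-dhcp-leases with a growing backoff; the network name and MAC address are the ones from the log, everything else is an assumption:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls the libvirt network's DHCP leases until a lease for the
// given MAC shows up, roughly what the "waiting for IP" retry loop does.
func waitForIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "-c", "qemu:///system", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// Lease rows print the address as "<ip>/<prefix>"; grab that field.
				for _, f := range strings.Fields(line) {
					if strings.Contains(f, "/") && strings.Count(f, ".") == 3 {
						return strings.Split(f, "/")[0], nil
					}
				}
			}
		}
		time.Sleep(backoff)
		backoff *= 2 // the real loop also grows (and jitters) its delay between attempts
	}
	return "", fmt.Errorf("no DHCP lease for %s on %s within %v", mac, network, timeout)
}

func main() {
	ip, err := waitForIP("mk-addons-305823", "52:54:00:48:d4:0b", 2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("VM IP:", ip)
}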
	I1019 16:21:59.591042  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.591458  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:minikube Clientid:01:52:54:00:48:d4:0b}
	I1019 16:21:59.591484  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.591674  278987 main.go:143] libmachine: (addons-305823) DBG | Using SSH client type: external
	I1019 16:21:59.591703  278987 main.go:143] libmachine: (addons-305823) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa (-rw-------)
	I1019 16:21:59.591738  278987 main.go:143] libmachine: (addons-305823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.11 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 16:21:59.591757  278987 main.go:143] libmachine: (addons-305823) DBG | About to run SSH command:
	I1019 16:21:59.591792  278987 main.go:143] libmachine: (addons-305823) DBG | exit 0
	I1019 16:21:59.728425  278987 main.go:143] libmachine: (addons-305823) DBG | SSH cmd err, output: <nil>: 
	I1019 16:21:59.728549  278987 main.go:143] libmachine: (addons-305823) domain creation complete
	I1019 16:21:59.728956  278987 main.go:143] libmachine: (addons-305823) Calling .GetConfigRaw
	I1019 16:21:59.729610  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:21:59.729838  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:21:59.730043  278987 main.go:143] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1019 16:21:59.730068  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:21:59.731405  278987 main.go:143] libmachine: Detecting operating system of created instance...
	I1019 16:21:59.731422  278987 main.go:143] libmachine: Waiting for SSH to be available...
	I1019 16:21:59.731429  278987 main.go:143] libmachine: Getting to WaitForSSH function...
	I1019 16:21:59.731438  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:21:59.733938  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.734340  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:21:59.734367  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.734517  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:21:59.734707  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.734853  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.735006  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:21:59.735139  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:59.735350  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:21:59.735360  278987 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1019 16:21:59.842972  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
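Once the external ssh probe succeeds, libmachine switches to its built-in SSH client and runs exit 0 (and, later, the provisioning commands) over that connection. A minimal sketch of the same pattern using golang.org/x/crypto/ssh; the address, user, and key path are taken from the log, and host-key checking is disabled to match the StrictHostKeyChecking=no options shown above:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH opens an SSH session with a private key and runs a single command,
// comparable to libmachine's "About to run SSH command: exit 0" step.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The log disables host-key checking; same here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.11:22", "docker",
		"/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa", "exit 0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}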
	I1019 16:21:59.843012  278987 main.go:143] libmachine: Detecting the provisioner...
	I1019 16:21:59.843024  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:21:59.846303  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.846757  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:21:59.846782  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.846968  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:21:59.847182  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.847358  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.847519  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:21:59.847690  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:59.847891  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:21:59.847901  278987 main.go:143] libmachine: About to run SSH command:
	cat /etc/os-release
	I1019 16:21:59.955592  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2025.02-dirty
	ID=buildroot
	VERSION_ID=2025.02
	PRETTY_NAME="Buildroot 2025.02"
	
	I1019 16:21:59.955661  278987 main.go:143] libmachine: found compatible host: buildroot
	I1019 16:21:59.955674  278987 main.go:143] libmachine: Provisioning with buildroot...
	I1019 16:21:59.955686  278987 main.go:143] libmachine: (addons-305823) Calling .GetMachineName
	I1019 16:21:59.955955  278987 buildroot.go:166] provisioning hostname "addons-305823"
	I1019 16:21:59.955975  278987 main.go:143] libmachine: (addons-305823) Calling .GetMachineName
	I1019 16:21:59.956215  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:21:59.959338  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.959745  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:21:59.959772  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:21:59.959923  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:21:59.960132  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.960333  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:21:59.960490  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:21:59.960665  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:21:59.960888  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:21:59.960899  278987 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-305823 && echo "addons-305823" | sudo tee /etc/hostname
	I1019 16:22:00.082745  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-305823
	
	I1019 16:22:00.082782  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.085956  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.086361  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.086401  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.086579  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:00.086765  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.086926  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.087128  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:00.087321  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:22:00.087533  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:22:00.087548  278987 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-305823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-305823/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-305823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 16:22:00.204460  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 16:22:00.204493  278987 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-274250/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-274250/.minikube}
	I1019 16:22:00.204541  278987 buildroot.go:174] setting up certificates
	I1019 16:22:00.204554  278987 provision.go:84] configureAuth start
	I1019 16:22:00.204567  278987 main.go:143] libmachine: (addons-305823) Calling .GetMachineName
	I1019 16:22:00.204869  278987 main.go:143] libmachine: (addons-305823) Calling .GetIP
	I1019 16:22:00.207762  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.208116  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.208148  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.208319  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.210778  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.211234  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.211265  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.211435  278987 provision.go:143] copyHostCerts
	I1019 16:22:00.211541  278987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem (1082 bytes)
	I1019 16:22:00.211718  278987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem (1123 bytes)
	I1019 16:22:00.211795  278987 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem (1675 bytes)
	I1019 16:22:00.211853  278987 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem org=jenkins.addons-305823 san=[127.0.0.1 192.168.39.11 addons-305823 localhost minikube]
	I1019 16:22:00.437331  278987 provision.go:177] copyRemoteCerts
	I1019 16:22:00.437395  278987 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 16:22:00.437422  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.440328  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.440669  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.440697  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.440873  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:00.441075  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.441237  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:00.441341  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:00.525888  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1019 16:22:00.553622  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 16:22:00.587934  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 16:22:00.620950  278987 provision.go:87] duration metric: took 416.379675ms to configureAuth
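configureAuth above issues a server certificate signed by the local minikube CA with SANs [127.0.0.1 192.168.39.11 addons-305823 localhost minikube] and copies it into /etc/docker on the guest. A condensed illustration of that style of issuance with Go's crypto/x509; this is not minikube's actual code, and serial numbers, key sizes, and validity periods are simplified:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA used to sign the per-machine server cert (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs the log reports for addons-305823.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-305823", Organization: []string{"jenkins.addons-305823"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		DNSNames:     []string{"addons-305823", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.11")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}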
	I1019 16:22:00.621003  278987 buildroot.go:189] setting minikube options for container-runtime
	I1019 16:22:00.621214  278987 config.go:182] Loaded profile config "addons-305823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:00.621311  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.624645  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.625083  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.625110  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.625365  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:00.625605  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.625774  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.625926  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:00.626111  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:22:00.626388  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:22:00.626404  278987 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 16:22:00.859973  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 16:22:00.860037  278987 main.go:143] libmachine: Checking connection to Docker...
	I1019 16:22:00.860050  278987 main.go:143] libmachine: (addons-305823) Calling .GetURL
	I1019 16:22:00.861341  278987 main.go:143] libmachine: (addons-305823) DBG | using libvirt version 8000000
	I1019 16:22:00.864514  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.865314  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.865341  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.865571  278987 main.go:143] libmachine: Docker is up and running!
	I1019 16:22:00.865583  278987 main.go:143] libmachine: Reticulating splines...
	I1019 16:22:00.865591  278987 client.go:174] duration metric: took 17.599565918s to LocalClient.Create
	I1019 16:22:00.865612  278987 start.go:167] duration metric: took 17.599645307s to libmachine.API.Create "addons-305823"
	I1019 16:22:00.865621  278987 start.go:293] postStartSetup for "addons-305823" (driver="kvm2")
	I1019 16:22:00.865630  278987 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 16:22:00.865646  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:00.865879  278987 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 16:22:00.865922  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.868434  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.868797  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.868825  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.868968  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:00.869159  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.869320  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:00.869438  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:00.953634  278987 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 16:22:00.958201  278987 info.go:137] Remote host: Buildroot 2025.02
	I1019 16:22:00.958251  278987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/addons for local assets ...
	I1019 16:22:00.958336  278987 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/files for local assets ...
	I1019 16:22:00.958367  278987 start.go:296] duration metric: took 92.741488ms for postStartSetup
	I1019 16:22:00.958412  278987 main.go:143] libmachine: (addons-305823) Calling .GetConfigRaw
	I1019 16:22:00.959062  278987 main.go:143] libmachine: (addons-305823) Calling .GetIP
	I1019 16:22:00.962075  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.962560  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.962596  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.962877  278987 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/config.json ...
	I1019 16:22:00.963091  278987 start.go:128] duration metric: took 17.714391899s to createHost
	I1019 16:22:00.963116  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:00.965652  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.966103  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:00.966133  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:00.966317  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:00.966494  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.966621  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:00.966732  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:00.966919  278987 main.go:143] libmachine: Using SSH client type: native
	I1019 16:22:00.967157  278987 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.11 22 <nil> <nil>}
	I1019 16:22:00.967170  278987 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1019 16:22:01.074760  278987 main.go:143] libmachine: SSH cmd err, output: <nil>: 1760890921.037418260
	
	I1019 16:22:01.074782  278987 fix.go:216] guest clock: 1760890921.037418260
	I1019 16:22:01.074790  278987 fix.go:229] Guest: 2025-10-19 16:22:01.03741826 +0000 UTC Remote: 2025-10-19 16:22:00.963104514 +0000 UTC m=+17.828332828 (delta=74.313746ms)
	I1019 16:22:01.074810  278987 fix.go:200] guest clock delta is within tolerance: 74.313746ms
	I1019 16:22:01.074814  278987 start.go:83] releasing machines lock for "addons-305823", held for 17.826196548s
	I1019 16:22:01.074835  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:01.075090  278987 main.go:143] libmachine: (addons-305823) Calling .GetIP
	I1019 16:22:01.078297  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.078701  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:01.078729  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.078851  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:01.079333  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:01.079570  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:01.079709  278987 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 16:22:01.079758  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:01.079788  278987 ssh_runner.go:195] Run: cat /version.json
	I1019 16:22:01.079815  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:01.082825  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.083022  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.083299  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:01.083320  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.083344  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:01.083371  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:01.083504  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:01.083672  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:01.083758  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:01.083830  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:01.083880  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:01.083954  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:01.084043  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:01.084158  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:01.191355  278987 ssh_runner.go:195] Run: systemctl --version
	I1019 16:22:01.197856  278987 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 16:22:01.351780  278987 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 16:22:01.358400  278987 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 16:22:01.358484  278987 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 16:22:01.377517  278987 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1019 16:22:01.377537  278987 start.go:496] detecting cgroup driver to use...
	I1019 16:22:01.377607  278987 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 16:22:01.395309  278987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 16:22:01.410590  278987 docker.go:218] disabling cri-docker service (if available) ...
	I1019 16:22:01.410629  278987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 16:22:01.426427  278987 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 16:22:01.441081  278987 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 16:22:01.597759  278987 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 16:22:01.803728  278987 docker.go:234] disabling docker service ...
	I1019 16:22:01.803795  278987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 16:22:01.823070  278987 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 16:22:01.836824  278987 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 16:22:01.991532  278987 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 16:22:02.136914  278987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 16:22:02.152496  278987 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 16:22:02.175734  278987 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 16:22:02.175802  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.187641  278987 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 16:22:02.187725  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.199518  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.211300  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.223543  278987 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 16:22:02.237343  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.248995  278987 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.267720  278987 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 16:22:02.284754  278987 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 16:22:02.296694  278987 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 16:22:02.296766  278987 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 16:22:02.317771  278987 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 16:22:02.328684  278987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:22:02.466462  278987 ssh_runner.go:195] Run: sudo systemctl restart crio
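Taken together, the tee and sed commands above set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon to the pod cgroup, and allow unprivileged low ports before this final restart. The resulting drop-in looks roughly like the fragment below; the exact section layout of 02-crio.conf in the minikube ISO may differ, so treat this as an illustration of the intended settings rather than a verbatim file:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]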
	I1019 16:22:02.695391  278987 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 16:22:02.695508  278987 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 16:22:02.700810  278987 start.go:564] Will wait 60s for crictl version
	I1019 16:22:02.700910  278987 ssh_runner.go:195] Run: which crictl
	I1019 16:22:02.704887  278987 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 16:22:02.742159  278987 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1019 16:22:02.742293  278987 ssh_runner.go:195] Run: crio --version
	I1019 16:22:02.771669  278987 ssh_runner.go:195] Run: crio --version
	I1019 16:22:02.814884  278987 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1019 16:22:02.815858  278987 main.go:143] libmachine: (addons-305823) Calling .GetIP
	I1019 16:22:02.818913  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:02.819304  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:02.819332  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:02.819599  278987 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1019 16:22:02.824109  278987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:22:02.838539  278987 kubeadm.go:884] updating cluster {Name:addons-305823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-305823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 16:22:02.838670  278987 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:22:02.838777  278987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:22:02.869476  278987 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1019 16:22:02.869557  278987 ssh_runner.go:195] Run: which lz4
	I1019 16:22:02.873238  278987 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1019 16:22:02.877572  278987 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1019 16:22:02.877600  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1019 16:22:04.239806  278987 crio.go:462] duration metric: took 1.366606293s to copy over tarball
	I1019 16:22:04.239903  278987 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1019 16:22:05.798907  278987 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.558967339s)
	I1019 16:22:05.798941  278987 crio.go:469] duration metric: took 1.559102322s to extract the tarball
	I1019 16:22:05.798974  278987 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1019 16:22:05.839410  278987 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 16:22:05.884576  278987 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 16:22:05.884608  278987 cache_images.go:86] Images are preloaded, skipping loading
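The preload logic above runs sudo crictl images --output json twice: before extraction it finds no registry.k8s.io/kube-apiserver:v1.34.1 and downloads the preload tarball, and afterwards it confirms every image is present. A small sketch of that presence check in Go; the JSON field names ("images", "repoTags") follow crictl's usual output and should be treated as an assumption if your crictl version differs:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// imageList mirrors the subset of `crictl images --output json` used here.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given image tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.34.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("preloaded:", ok)
}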
	I1019 16:22:05.884619  278987 kubeadm.go:935] updating node { 192.168.39.11 8443 v1.34.1 crio true true} ...
	I1019 16:22:05.884782  278987 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-305823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.11
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-305823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 16:22:05.884872  278987 ssh_runner.go:195] Run: crio config
	I1019 16:22:05.929062  278987 cni.go:84] Creating CNI manager for ""
	I1019 16:22:05.929082  278987 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 16:22:05.929099  278987 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 16:22:05.929120  278987 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.11 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-305823 NodeName:addons-305823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.11"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.11 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 16:22:05.929241  278987 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.11
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-305823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.11"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.11"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 16:22:05.929307  278987 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 16:22:05.941649  278987 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 16:22:05.941708  278987 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 16:22:05.953311  278987 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1019 16:22:05.972558  278987 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 16:22:05.990558  278987 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1019 16:22:06.009076  278987 ssh_runner.go:195] Run: grep 192.168.39.11	control-plane.minikube.internal$ /etc/hosts
	I1019 16:22:06.012791  278987 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.11	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 16:22:06.025799  278987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:22:06.160827  278987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:22:06.181438  278987 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823 for IP: 192.168.39.11
	I1019 16:22:06.181466  278987 certs.go:195] generating shared ca certs ...
	I1019 16:22:06.181513  278987 certs.go:227] acquiring lock for ca certs: {Name:mk7795547103f90561160e6fc6ada1c3a2cc6617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:06.181677  278987 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key
	I1019 16:22:06.426023  278987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt ...
	I1019 16:22:06.426053  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt: {Name:mkd814c34f656536ed52a6af75024477f9bceee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:06.426224  278987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key ...
	I1019 16:22:06.426234  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key: {Name:mke5d1254b0cf6d0b1457f516acd27383ce78b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:06.426321  278987 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key
	I1019 16:22:07.043001  278987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.crt ...
	I1019 16:22:07.043030  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.crt: {Name:mk883ac54dc3ff3d7297a995225ac41d288fa9fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:07.043945  278987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key ...
	I1019 16:22:07.043962  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key: {Name:mk95865abf9cadee8cb352d2b4b4e1ecb4f75038 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:07.044079  278987 certs.go:257] generating profile certs ...
	I1019 16:22:07.044145  278987 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.key
	I1019 16:22:07.044172  278987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt with IP's: []
	I1019 16:22:08.056703  278987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt ...
	I1019 16:22:08.056740  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: {Name:mk5bf8bff60ae39be2c5e4922ff754cbc13d5709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.056913  278987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.key ...
	I1019 16:22:08.056924  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.key: {Name:mkccc026714d7fe500ecddfcc8d5394388373149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.057009  278987 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key.f54322c0
	I1019 16:22:08.057029  278987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt.f54322c0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.11]
	I1019 16:22:08.103720  278987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt.f54322c0 ...
	I1019 16:22:08.103744  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt.f54322c0: {Name:mke92b6e40ac9ecd7b63f032e0e0c48a6538fce2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.104485  278987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key.f54322c0 ...
	I1019 16:22:08.104501  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key.f54322c0: {Name:mkcdff33db7dd8cbd10c50b15b62fec07781559a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.104582  278987 certs.go:382] copying /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt.f54322c0 -> /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt
	I1019 16:22:08.104688  278987 certs.go:386] copying /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key.f54322c0 -> /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key
	I1019 16:22:08.104746  278987 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.key
	I1019 16:22:08.104766  278987 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.crt with IP's: []
	I1019 16:22:08.332959  278987 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.crt ...
	I1019 16:22:08.332992  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.crt: {Name:mk0d3a8ab6871d91f3ebac3c7b6f3c1ed3de8d27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.333752  278987 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.key ...
	I1019 16:22:08.333773  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.key: {Name:mkc14608128a9dff751677d99ec72cab782ed8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:08.334327  278987 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 16:22:08.334365  278987 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem (1082 bytes)
	I1019 16:22:08.334389  278987 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem (1123 bytes)
	I1019 16:22:08.334411  278987 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem (1675 bytes)
	I1019 16:22:08.335039  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 16:22:08.367054  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 16:22:08.397136  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 16:22:08.424060  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 16:22:08.450832  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 16:22:08.478367  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 16:22:08.505710  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 16:22:08.531719  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 16:22:08.558446  278987 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 16:22:08.584731  278987 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 16:22:08.603375  278987 ssh_runner.go:195] Run: openssl version
	I1019 16:22:08.609146  278987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 16:22:08.621282  278987 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:22:08.625843  278987 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:22:08.625895  278987 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 16:22:08.632397  278987 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 16:22:08.644271  278987 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 16:22:08.648764  278987 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1019 16:22:08.648813  278987 kubeadm.go:401] StartCluster: {Name:addons-305823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-305823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:22:08.648891  278987 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 16:22:08.648931  278987 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 16:22:08.726255  278987 cri.go:89] found id: ""
	I1019 16:22:08.726321  278987 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 16:22:08.739907  278987 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 16:22:08.755659  278987 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 16:22:08.767357  278987 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 16:22:08.767377  278987 kubeadm.go:158] found existing configuration files:
	
	I1019 16:22:08.767420  278987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 16:22:08.777833  278987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 16:22:08.777899  278987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 16:22:08.788698  278987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 16:22:08.798618  278987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 16:22:08.798692  278987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 16:22:08.809214  278987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 16:22:08.818911  278987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 16:22:08.818965  278987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 16:22:08.829446  278987 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 16:22:08.839239  278987 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 16:22:08.839297  278987 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 16:22:08.849603  278987 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1019 16:22:08.983217  278987 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1019 16:22:19.322068  278987 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1019 16:22:19.322140  278987 kubeadm.go:319] [preflight] Running pre-flight checks
	I1019 16:22:19.322237  278987 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1019 16:22:19.322330  278987 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1019 16:22:19.322436  278987 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1019 16:22:19.322510  278987 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1019 16:22:19.323796  278987 out.go:252]   - Generating certificates and keys ...
	I1019 16:22:19.323878  278987 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1019 16:22:19.323953  278987 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1019 16:22:19.324049  278987 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1019 16:22:19.324146  278987 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1019 16:22:19.324200  278987 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1019 16:22:19.324279  278987 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1019 16:22:19.324345  278987 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1019 16:22:19.324445  278987 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-305823 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1019 16:22:19.324502  278987 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1019 16:22:19.324647  278987 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-305823 localhost] and IPs [192.168.39.11 127.0.0.1 ::1]
	I1019 16:22:19.324724  278987 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1019 16:22:19.324776  278987 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1019 16:22:19.324812  278987 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1019 16:22:19.324861  278987 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1019 16:22:19.324903  278987 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1019 16:22:19.324955  278987 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1019 16:22:19.325031  278987 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1019 16:22:19.325090  278987 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1019 16:22:19.325137  278987 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1019 16:22:19.325207  278987 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1019 16:22:19.325296  278987 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1019 16:22:19.327257  278987 out.go:252]   - Booting up control plane ...
	I1019 16:22:19.327382  278987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1019 16:22:19.327511  278987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1019 16:22:19.327614  278987 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1019 16:22:19.327753  278987 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1019 16:22:19.327851  278987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1019 16:22:19.328025  278987 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1019 16:22:19.328146  278987 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1019 16:22:19.328186  278987 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1019 16:22:19.328362  278987 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1019 16:22:19.328496  278987 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1019 16:22:19.328591  278987 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.064565ms
	I1019 16:22:19.328726  278987 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1019 16:22:19.328844  278987 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.11:8443/livez
	I1019 16:22:19.328972  278987 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1019 16:22:19.329065  278987 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1019 16:22:19.329189  278987 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.046925683s
	I1019 16:22:19.329289  278987 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.441312819s
	I1019 16:22:19.329369  278987 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001646839s
	I1019 16:22:19.329470  278987 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1019 16:22:19.329617  278987 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1019 16:22:19.329679  278987 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1019 16:22:19.329900  278987 kubeadm.go:319] [mark-control-plane] Marking the node addons-305823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1019 16:22:19.329971  278987 kubeadm.go:319] [bootstrap-token] Using token: 9msnkt.1iehnne46qh649bc
	I1019 16:22:19.331275  278987 out.go:252]   - Configuring RBAC rules ...
	I1019 16:22:19.331383  278987 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1019 16:22:19.331484  278987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1019 16:22:19.331656  278987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1019 16:22:19.331845  278987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1019 16:22:19.331938  278987 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1019 16:22:19.332083  278987 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1019 16:22:19.332267  278987 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1019 16:22:19.332309  278987 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1019 16:22:19.332370  278987 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1019 16:22:19.332377  278987 kubeadm.go:319] 
	I1019 16:22:19.332450  278987 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1019 16:22:19.332461  278987 kubeadm.go:319] 
	I1019 16:22:19.332566  278987 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1019 16:22:19.332576  278987 kubeadm.go:319] 
	I1019 16:22:19.332598  278987 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1019 16:22:19.332654  278987 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1019 16:22:19.332698  278987 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1019 16:22:19.332704  278987 kubeadm.go:319] 
	I1019 16:22:19.332745  278987 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1019 16:22:19.332751  278987 kubeadm.go:319] 
	I1019 16:22:19.332795  278987 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1019 16:22:19.332801  278987 kubeadm.go:319] 
	I1019 16:22:19.332851  278987 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1019 16:22:19.332925  278987 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1019 16:22:19.333029  278987 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1019 16:22:19.333040  278987 kubeadm.go:319] 
	I1019 16:22:19.333115  278987 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1019 16:22:19.333177  278987 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1019 16:22:19.333182  278987 kubeadm.go:319] 
	I1019 16:22:19.333271  278987 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9msnkt.1iehnne46qh649bc \
	I1019 16:22:19.333385  278987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:eeae2241ed136642b8a30b4c2e7cb5956bd5d5b768aacf86a405f1f1b1fcf52f \
	I1019 16:22:19.333410  278987 kubeadm.go:319] 	--control-plane 
	I1019 16:22:19.333413  278987 kubeadm.go:319] 
	I1019 16:22:19.333517  278987 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1019 16:22:19.333526  278987 kubeadm.go:319] 
	I1019 16:22:19.333643  278987 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9msnkt.1iehnne46qh649bc \
	I1019 16:22:19.333744  278987 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:eeae2241ed136642b8a30b4c2e7cb5956bd5d5b768aacf86a405f1f1b1fcf52f 
	I1019 16:22:19.333755  278987 cni.go:84] Creating CNI manager for ""
	I1019 16:22:19.333770  278987 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 16:22:19.335803  278987 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 16:22:19.336720  278987 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 16:22:19.350119  278987 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1019 16:22:19.372326  278987 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 16:22:19.372450  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-305823 minikube.k8s.io/updated_at=2025_10_19T16_22_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34 minikube.k8s.io/name=addons-305823 minikube.k8s.io/primary=true
	I1019 16:22:19.372455  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:19.397457  278987 ops.go:34] apiserver oom_adj: -16
	I1019 16:22:19.500120  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:20.000963  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:20.501113  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:21.000485  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:21.501275  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:22.000482  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:22.501252  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:23.001111  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:23.500873  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:24.000199  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:24.500869  278987 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1019 16:22:24.584592  278987 kubeadm.go:1114] duration metric: took 5.21221195s to wait for elevateKubeSystemPrivileges
	I1019 16:22:24.584649  278987 kubeadm.go:403] duration metric: took 15.935841348s to StartCluster
	I1019 16:22:24.584673  278987 settings.go:142] acquiring lock: {Name:mkf8e8333d0302d1bf1fad4a2ff30b0524cb52b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:24.584821  278987 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:22:24.585313  278987 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/kubeconfig: {Name:mk22311d445eddc7a50c63a1389fab4cf9c803b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:22:24.585543  278987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1019 16:22:24.585579  278987 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.11 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 16:22:24.585634  278987 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1019 16:22:24.585816  278987 addons.go:70] Setting gcp-auth=true in profile "addons-305823"
	I1019 16:22:24.585837  278987 addons.go:70] Setting yakd=true in profile "addons-305823"
	I1019 16:22:24.585843  278987 config.go:182] Loaded profile config "addons-305823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:24.585860  278987 mustload.go:66] Loading cluster: addons-305823
	I1019 16:22:24.585866  278987 addons.go:70] Setting volcano=true in profile "addons-305823"
	I1019 16:22:24.585858  278987 addons.go:239] Setting addon yakd=true in "addons-305823"
	I1019 16:22:24.585918  278987 addons.go:70] Setting inspektor-gadget=true in profile "addons-305823"
	I1019 16:22:24.585930  278987 addons.go:239] Setting addon inspektor-gadget=true in "addons-305823"
	I1019 16:22:24.585955  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.585968  278987 addons.go:70] Setting ingress=true in profile "addons-305823"
	I1019 16:22:24.585969  278987 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-305823"
	I1019 16:22:24.586005  278987 addons.go:70] Setting ingress-dns=true in profile "addons-305823"
	I1019 16:22:24.585969  278987 addons.go:70] Setting registry=true in profile "addons-305823"
	I1019 16:22:24.586019  278987 addons.go:239] Setting addon ingress-dns=true in "addons-305823"
	I1019 16:22:24.586029  278987 addons.go:239] Setting addon registry=true in "addons-305823"
	I1019 16:22:24.586030  278987 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-305823"
	I1019 16:22:24.586062  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.586080  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.586093  278987 config.go:182] Loaded profile config "addons-305823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:22:24.586091  278987 addons.go:70] Setting registry-creds=true in profile "addons-305823"
	I1019 16:22:24.586146  278987 addons.go:239] Setting addon registry-creds=true in "addons-305823"
	I1019 16:22:24.586201  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.585907  278987 addons.go:70] Setting volumesnapshots=true in profile "addons-305823"
	I1019 16:22:24.586224  278987 addons.go:239] Setting addon volumesnapshots=true in "addons-305823"
	I1019 16:22:24.586247  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.586543  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586569  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.586586  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586608  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586616  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586628  278987 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-305823"
	I1019 16:22:24.585900  278987 addons.go:239] Setting addon volcano=true in "addons-305823"
	I1019 16:22:24.586639  278987 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-305823"
	I1019 16:22:24.586645  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.586661  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.586662  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.586668  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586709  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.586710  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.586743  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.586761  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.585997  278987 addons.go:239] Setting addon ingress=true in "addons-305823"
	I1019 16:22:24.586844  278987 addons.go:70] Setting storage-provisioner=true in profile "addons-305823"
	I1019 16:22:24.586854  278987 addons.go:239] Setting addon storage-provisioner=true in "addons-305823"
	I1019 16:22:24.586861  278987 addons.go:70] Setting cloud-spanner=true in profile "addons-305823"
	I1019 16:22:24.586876  278987 addons.go:239] Setting addon cloud-spanner=true in "addons-305823"
	I1019 16:22:24.586882  278987 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-305823"
	I1019 16:22:24.586895  278987 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-305823"
	I1019 16:22:24.586910  278987 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-305823"
	I1019 16:22:24.586963  278987 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-305823"
	I1019 16:22:24.587005  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.587029  278987 addons.go:70] Setting default-storageclass=true in profile "addons-305823"
	I1019 16:22:24.585960  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.587056  278987 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-305823"
	I1019 16:22:24.586620  278987 addons.go:70] Setting metrics-server=true in profile "addons-305823"
	I1019 16:22:24.587104  278987 addons.go:239] Setting addon metrics-server=true in "addons-305823"
	I1019 16:22:24.587196  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.587212  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.587241  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.587301  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.587466  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.587497  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.587559  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.587564  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.587602  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.587662  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.587688  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.587842  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.587854  278987 out.go:179] * Verifying Kubernetes components...
	I1019 16:22:24.587939  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.587965  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.588012  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.588032  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.588199  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.588318  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.588344  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.588617  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.589847  278987 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 16:22:24.600383  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.600456  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.604406  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.604888  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.604506  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.605202  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.604685  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.605446  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.604762  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45141
	I1019 16:22:24.607313  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.609729  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.609761  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.610334  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.610505  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.613105  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34391
	I1019 16:22:24.613781  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43407
	I1019 16:22:24.614207  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38449
	I1019 16:22:24.614411  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.614902  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.615607  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.615689  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.616107  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.616663  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.616698  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.616712  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.616722  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.617382  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.617748  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.617998  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.618699  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.618716  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.619212  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.619304  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.620567  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.620610  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.620914  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.621038  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.625626  278987 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-305823"
	I1019 16:22:24.625683  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.626098  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.626138  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.635622  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:37509
	I1019 16:22:24.640573  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.641525  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.641549  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.641715  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45881
	I1019 16:22:24.641934  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.644504  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.645075  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.645098  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.645512  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33553
	I1019 16:22:24.645543  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.646061  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.646103  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.646459  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.647125  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.647201  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.647766  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.648530  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.648681  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.649092  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.649218  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.649239  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34065
	I1019 16:22:24.652066  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:40775
	I1019 16:22:24.652084  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38513
	I1019 16:22:24.652782  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.653502  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.653551  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.654182  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.654368  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.654686  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.655260  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.655806  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.655825  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.656451  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.656604  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.656623  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.656702  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45027
	I1019 16:22:24.657050  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.657158  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.657191  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.657321  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.657837  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.658233  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.658252  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.658779  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.659600  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.659645  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.661874  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.666115  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33923
	I1019 16:22:24.667115  278987 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1019 16:22:24.668194  278987 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:22:24.668214  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1019 16:22:24.668247  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.670213  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.670233  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43331
	I1019 16:22:24.670352  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.670417  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34845
	I1019 16:22:24.670846  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.671216  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.671233  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.671279  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.671295  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.671666  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.671723  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.672502  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.672634  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.673005  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.673044  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.673447  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.673466  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.674024  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.674313  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.674640  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1019 16:22:24.674754  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38427
	I1019 16:22:24.675790  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1019 16:22:24.675813  278987 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1019 16:22:24.675834  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.675842  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46067
	I1019 16:22:24.676761  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.677630  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.677650  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.677859  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.678122  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.678524  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.678558  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.678833  278987 addons.go:239] Setting addon default-storageclass=true in "addons-305823"
	I1019 16:22:24.678888  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:24.679032  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.679072  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.679336  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.679443  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.679819  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.684129  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34765
	I1019 16:22:24.684200  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46751
	I1019 16:22:24.684607  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.685299  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.685457  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.685631  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.686392  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35839
	I1019 16:22:24.686684  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.686736  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.686903  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.687029  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.687599  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.687732  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.687689  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.688067  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.688312  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.688727  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.688751  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.688921  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.689656  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.689724  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.689938  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.690189  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.691464  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.691513  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.693587  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:36893
	I1019 16:22:24.693882  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1019 16:22:24.694360  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.695407  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.695539  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.695616  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.696056  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35491
	I1019 16:22:24.696200  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.696218  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.696246  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.696722  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.696922  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.697582  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.697600  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.697672  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.698207  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.698964  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.699080  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.699097  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.699193  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.699896  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.699945  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.701068  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.701231  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.701245  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.701810  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.701844  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.702169  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45417
	I1019 16:22:24.702306  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:39159
	I1019 16:22:24.702465  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.704357  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.704436  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.704968  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.705000  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.705590  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.705632  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.705652  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.706392  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.705538  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.706447  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.706807  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34901
	I1019 16:22:24.707592  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.707632  278987 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1019 16:22:24.707703  278987 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1019 16:22:24.708577  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.708602  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.708392  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33533
	I1019 16:22:24.708764  278987 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1019 16:22:24.708777  278987 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1019 16:22:24.708795  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.709395  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.709746  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.710087  278987 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:22:24.710285  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.710621  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.711285  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.711300  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.711801  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.712067  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.713111  278987 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:22:24.714275  278987 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:22:24.714297  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1019 16:22:24.714317  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.715623  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.715645  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.716157  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.716261  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.716421  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.717826  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.718582  278987 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1019 16:22:24.718785  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.719232  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41165
	I1019 16:22:24.720053  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.720816  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.720832  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.721720  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1019 16:22:24.721727  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.721815  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.722081  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.722171  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.722243  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35169
	I1019 16:22:24.722396  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.722576  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.722614  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.722896  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.723703  278987 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1019 16:22:24.723452  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.723654  278987 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1019 16:22:24.723856  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1019 16:22:24.723882  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.723712  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.723911  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.723943  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.724610  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.724689  278987 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1019 16:22:24.724748  278987 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1019 16:22:24.724759  278987 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1019 16:22:24.724777  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.725110  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.725226  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.725573  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.725811  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.725885  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.726596  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.726649  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.727665  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.728345  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:24.728392  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:24.728565  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.728604  278987 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1019 16:22:24.729669  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.729752  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.729855  278987 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:22:24.729873  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1019 16:22:24.729890  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.729789  278987 out.go:179]   - Using image docker.io/registry:3.0.0
	I1019 16:22:24.730185  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.730544  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.730832  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.731199  278987 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1019 16:22:24.731216  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1019 16:22:24.731241  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.731241  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.732480  278987 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1019 16:22:24.732975  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.733349  278987 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1019 16:22:24.733367  278987 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1019 16:22:24.733385  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.734612  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.734817  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.734854  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:37913
	I1019 16:22:24.736735  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1019 16:22:24.736860  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.736915  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.737068  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35655
	I1019 16:22:24.737090  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.737252  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.737403  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.737535  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.737966  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:37011
	I1019 16:22:24.738198  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.738394  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.738238  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.739028  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.739045  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.739151  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.739189  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.739153  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.739492  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.738705  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.739566  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.739634  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.739785  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.739946  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.739964  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.740250  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.740313  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.740458  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.741004  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.741051  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.741071  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.741417  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.741449  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.741687  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.741703  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.741813  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.741876  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.742381  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.742624  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.742703  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.742760  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.743006  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.743039  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.743070  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.743220  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.743232  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.743281  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.743752  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.743894  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.743916  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.743954  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.744027  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.744307  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.744614  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.744904  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.745114  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.745308  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.745608  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1019 16:22:24.746796  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:40263
	I1019 16:22:24.747468  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.747530  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.747725  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.747900  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.748082  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1019 16:22:24.748153  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:24.748463  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:24.748307  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.748517  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.748852  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:24.748869  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:24.748891  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:24.748902  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:24.748942  278987 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 16:22:24.748951  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.749030  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46245
	I1019 16:22:24.749110  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:24.749336  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.749152  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:24.749393  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	W1019 16:22:24.749502  278987 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1019 16:22:24.749766  278987 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1019 16:22:24.749824  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.750376  278987 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:22:24.750403  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 16:22:24.750421  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.750540  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.750566  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.750928  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1019 16:22:24.750952  278987 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:22:24.751020  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1019 16:22:24.751050  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.751228  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.751428  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.753008  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1019 16:22:24.753558  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.754621  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.754873  278987 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1019 16:22:24.754905  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1019 16:22:24.756413  278987 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1019 16:22:24.756437  278987 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:22:24.756482  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1019 16:22:24.756514  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.756490  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1019 16:22:24.756782  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.757484  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.757585  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.757561  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.758212  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.758217  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.758235  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.758554  278987 out.go:179]   - Using image docker.io/busybox:stable
	I1019 16:22:24.758649  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1019 16:22:24.758617  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.758640  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.758895  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.758972  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.759138  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.759272  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.759299  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.760017  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44247
	I1019 16:22:24.760072  278987 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:22:24.760095  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1019 16:22:24.760114  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.760550  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:24.761036  278987 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1019 16:22:24.761040  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:24.761096  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:24.761274  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.761517  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:24.761753  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:24.761828  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.761900  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.762055  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.762204  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.762353  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.762449  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1019 16:22:24.762464  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1019 16:22:24.762488  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.762492  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.764447  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:24.764759  278987 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 16:22:24.764777  278987 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 16:22:24.764791  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:24.765636  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.766335  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.766363  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.766571  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.766760  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.766915  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.767091  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.768089  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.768582  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.768610  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.768793  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.768822  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.768992  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.769177  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.769324  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:24.769315  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:24.769349  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:24.769561  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:24.769788  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:24.769949  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:24.770119  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	W1019 16:22:25.007193  278987 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36338->192.168.39.11:22: read: connection reset by peer
	I1019 16:22:25.007254  278987 retry.go:31] will retry after 222.577477ms: ssh: handshake failed: read tcp 192.168.39.1:36338->192.168.39.11:22: read: connection reset by peer
	W1019 16:22:25.034772  278987 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36358->192.168.39.11:22: read: connection reset by peer
	I1019 16:22:25.034810  278987 retry.go:31] will retry after 217.062954ms: ssh: handshake failed: read tcp 192.168.39.1:36358->192.168.39.11:22: read: connection reset by peer
	I1019 16:22:25.426831  278987 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:25.426854  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1019 16:22:25.489824  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1019 16:22:25.628653  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1019 16:22:25.630580  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1019 16:22:25.636837  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1019 16:22:25.636868  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1019 16:22:25.667632  278987 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1019 16:22:25.667655  278987 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1019 16:22:25.673198  278987 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1019 16:22:25.673228  278987 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1019 16:22:25.676235  278987 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1019 16:22:25.676256  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1019 16:22:25.676271  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1019 16:22:25.679415  278987 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1019 16:22:25.679439  278987 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1019 16:22:25.701385  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1019 16:22:25.715411  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 16:22:25.785456  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:25.795416  278987 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.209827881s)
	I1019 16:22:25.795435  278987 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.205556349s)
	I1019 16:22:25.795524  278987 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 16:22:25.795630  278987 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1019 16:22:25.830711  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1019 16:22:25.972856  278987 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1019 16:22:25.972886  278987 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1019 16:22:26.159452  278987 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:22:26.159484  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1019 16:22:26.163440  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1019 16:22:26.163477  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1019 16:22:26.189597  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1019 16:22:26.296995  278987 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1019 16:22:26.297031  278987 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1019 16:22:26.304779  278987 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1019 16:22:26.304802  278987 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1019 16:22:26.317109  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 16:22:26.365977  278987 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1019 16:22:26.366012  278987 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1019 16:22:26.443320  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1019 16:22:26.447849  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1019 16:22:26.447869  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1019 16:22:26.518736  278987 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1019 16:22:26.518768  278987 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1019 16:22:26.576068  278987 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:22:26.576105  278987 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1019 16:22:26.655704  278987 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:22:26.655727  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1019 16:22:26.741461  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1019 16:22:26.741485  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1019 16:22:26.868162  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1019 16:22:26.868192  278987 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1019 16:22:26.875098  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1019 16:22:26.891846  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1019 16:22:27.095471  278987 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1019 16:22:27.095508  278987 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1019 16:22:27.130809  278987 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:22:27.130837  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1019 16:22:27.341549  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1019 16:22:27.341583  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1019 16:22:27.426940  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:22:27.660434  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1019 16:22:27.660467  278987 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1019 16:22:28.138066  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1019 16:22:28.138099  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1019 16:22:28.520325  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1019 16:22:28.520355  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1019 16:22:28.827865  278987 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:22:28.827895  278987 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1019 16:22:29.098420  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1019 16:22:29.560858  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.070994693s)
	I1019 16:22:29.560913  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:29.560927  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:29.560948  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.932266263s)
	I1019 16:22:29.560995  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:29.561013  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:29.561286  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:29.561309  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:29.561319  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:29.561328  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:29.561394  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:29.561389  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:29.561408  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:29.561418  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:29.561425  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:29.561626  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:29.561636  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:29.561652  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:29.561666  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:29.561718  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:29.561729  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.096522  278987 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1019 16:22:32.096581  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:32.100549  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:32.101168  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:32.101206  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:32.101397  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:32.101604  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:32.101832  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:32.102013  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:32.254354  278987 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1019 16:22:32.307533  278987 addons.go:239] Setting addon gcp-auth=true in "addons-305823"
	I1019 16:22:32.307595  278987 host.go:66] Checking if "addons-305823" exists ...
	I1019 16:22:32.308004  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:32.308043  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:32.323373  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44055
	I1019 16:22:32.323935  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:32.324424  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:32.324448  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:32.324868  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:32.325549  278987 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:22:32.325594  278987 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:22:32.340422  278987 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1019 16:22:32.340923  278987 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:22:32.341344  278987 main.go:143] libmachine: Using API Version  1
	I1019 16:22:32.341366  278987 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:22:32.341732  278987 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:22:32.341938  278987 main.go:143] libmachine: (addons-305823) Calling .GetState
	I1019 16:22:32.343688  278987 main.go:143] libmachine: (addons-305823) Calling .DriverName
	I1019 16:22:32.343915  278987 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1019 16:22:32.343943  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHHostname
	I1019 16:22:32.347039  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:32.347465  278987 main.go:143] libmachine: (addons-305823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:d4:0b", ip: ""} in network mk-addons-305823: {Iface:virbr1 ExpiryTime:2025-10-19 17:21:58 +0000 UTC Type:0 Mac:52:54:00:48:d4:0b Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:addons-305823 Clientid:01:52:54:00:48:d4:0b}
	I1019 16:22:32.347498  278987 main.go:143] libmachine: (addons-305823) DBG | domain addons-305823 has defined IP address 192.168.39.11 and MAC address 52:54:00:48:d4:0b in network mk-addons-305823
	I1019 16:22:32.347678  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHPort
	I1019 16:22:32.347878  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHKeyPath
	I1019 16:22:32.348017  278987 main.go:143] libmachine: (addons-305823) Calling .GetSSHUsername
	I1019 16:22:32.348197  278987 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/addons-305823/id_rsa Username:docker}
	I1019 16:22:32.856022  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.225381422s)
	I1019 16:22:32.856097  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856105  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.179796885s)
	I1019 16:22:32.856151  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.154737644s)
	I1019 16:22:32.856188  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856154  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856233  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856273  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.070788334s)
	W1019 16:22:32.856306  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:32.856354  278987 retry.go:31] will retry after 365.891772ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:32.856187  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.140755299s)
	I1019 16:22:32.856404  278987 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.060860031s)
	I1019 16:22:32.856421  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856431  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856552  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.025795191s)
	I1019 16:22:32.856560  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.856567  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.856577  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.856585  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856204  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856596  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856620  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.666991417s)
	I1019 16:22:32.856357  278987 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.060697641s)
	I1019 16:22:32.856587  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856646  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856653  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856653  278987 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1019 16:22:32.856710  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.856721  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.53959266s)
	I1019 16:22:32.856738  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.856745  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.856753  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856763  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856773  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.413421134s)
	I1019 16:22:32.856790  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856810  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856113  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856796  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856861  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856890  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.981760989s)
	I1019 16:22:32.856908  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.856917  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856960  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.965085937s)
	I1019 16:22:32.856999  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.857011  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.857171  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.430183238s)
	W1019 16:22:32.857201  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:22:32.857215  278987 retry.go:31] will retry after 359.396806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1019 16:22:32.857321  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.857350  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.857351  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.857357  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.857370  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.857381  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.857353  278987 node_ready.go:35] waiting up to 6m0s for node "addons-305823" to be "Ready" ...
	I1019 16:22:32.857386  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.857403  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.857413  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.857414  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.857408  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.857420  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.857423  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.857427  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.857431  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.857437  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.857441  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.857446  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.856656  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.858111  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.858134  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.858140  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.858243  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.858268  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.858275  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.858329  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.858347  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.858352  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.858361  278987 addons.go:480] Verifying addon ingress=true in "addons-305823"
	I1019 16:22:32.858930  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.858965  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.858972  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.858991  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.858999  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.859086  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.859106  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.859112  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.859120  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.859126  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.859167  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.859187  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.859194  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.859201  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.859207  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.861274  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.861339  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.861347  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.862537  278987 out.go:179] * Verifying ingress addon...
	I1019 16:22:32.863812  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.863835  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.863858  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.863864  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.863873  278987 addons.go:480] Verifying addon registry=true in "addons-305823"
	I1019 16:22:32.863878  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.863917  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.863925  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.863933  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.863940  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.864074  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.864083  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.864089  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.864092  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.864108  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.864107  278987 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-305823 service yakd-dashboard -n yakd-dashboard
	
	I1019 16:22:32.864112  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.864170  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.864183  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.864119  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.864268  278987 addons.go:480] Verifying addon metrics-server=true in "addons-305823"
	I1019 16:22:32.864797  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.864858  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.864918  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.865891  278987 out.go:179] * Verifying registry addon...
	I1019 16:22:32.865913  278987 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1019 16:22:32.866086  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.866217  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.866157  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:32.867678  278987 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1019 16:22:32.870337  278987 node_ready.go:49] node "addons-305823" is "Ready"
	I1019 16:22:32.870370  278987 node_ready.go:38] duration metric: took 12.966836ms for node "addons-305823" to be "Ready" ...
	I1019 16:22:32.870388  278987 api_server.go:52] waiting for apiserver process to appear ...
	I1019 16:22:32.870435  278987 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:22:32.905862  278987 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1019 16:22:32.905885  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:32.906263  278987 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1019 16:22:32.906280  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:32.924674  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.924693  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.924939  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.924955  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:32.928519  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:32.928534  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:32.928747  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:32.928765  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	W1019 16:22:32.928862  278987 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1019 16:22:33.217361  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1019 16:22:33.223144  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:33.371438  278987 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-305823" context rescaled to 1 replicas
	I1019 16:22:33.373457  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.374204  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.862899  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.764391255s)
	I1019 16:22:33.862955  278987 api_server.go:72] duration metric: took 9.277342136s to wait for apiserver process to appear ...
	I1019 16:22:33.862969  278987 api_server.go:88] waiting for apiserver healthz status ...
	I1019 16:22:33.862925  278987 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.518992592s)
	I1019 16:22:33.863024  278987 api_server.go:253] Checking apiserver healthz at https://192.168.39.11:8443/healthz ...
	I1019 16:22:33.862973  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:33.863156  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:33.863492  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:33.863534  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:33.863553  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:33.863563  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:33.863572  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:33.863789  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:33.863819  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:33.863826  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:33.863837  278987 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-305823"
	I1019 16:22:33.864252  278987 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1019 16:22:33.865078  278987 out.go:179] * Verifying csi-hostpath-driver addon...
	I1019 16:22:33.865956  278987 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1019 16:22:33.866853  278987 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1019 16:22:33.866869  278987 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1019 16:22:33.866941  278987 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1019 16:22:33.880840  278987 api_server.go:279] https://192.168.39.11:8443/healthz returned 200:
	ok
	I1019 16:22:33.882035  278987 api_server.go:141] control plane version: v1.34.1
	I1019 16:22:33.882058  278987 api_server.go:131] duration metric: took 19.081707ms to wait for apiserver health ...
	I1019 16:22:33.882070  278987 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 16:22:33.909056  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:33.909055  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:33.909277  278987 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1019 16:22:33.909305  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:33.910181  278987 system_pods.go:59] 20 kube-system pods found
	I1019 16:22:33.910223  278987 system_pods.go:61] "amd-gpu-device-plugin-c8fj2" [eb0683d4-291a-49e4-a60e-470700b7a804] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:33.910240  278987 system_pods.go:61] "coredns-66bc5c9577-dvhs7" [72c5f4f8-21ed-4334-8345-455781b7b29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:33.910255  278987 system_pods.go:61] "coredns-66bc5c9577-g2x7b" [b3a9f036-63b2-429e-9f7c-aabe1aab698c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:33.910269  278987 system_pods.go:61] "csi-hostpath-attacher-0" [e1ab3346-13c0-4907-af41-1723d9a2454d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:33.910277  278987 system_pods.go:61] "csi-hostpath-resizer-0" [7aa89fab-980e-4dda-ac1c-2c2f4737a86f] Pending
	I1019 16:22:33.910283  278987 system_pods.go:61] "csi-hostpathplugin-jpdbc" [caa2c70a-ca56-43a6-91fe-d94b53794b7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:33.910297  278987 system_pods.go:61] "etcd-addons-305823" [51cbd091-70c5-4935-bc54-f2ba4220e6b9] Running
	I1019 16:22:33.910313  278987 system_pods.go:61] "kube-apiserver-addons-305823" [5a6b39cb-40a1-4701-bd23-da97be4e6a52] Running
	I1019 16:22:33.910319  278987 system_pods.go:61] "kube-controller-manager-addons-305823" [e6c6d0d0-66dc-4b58-8c9c-7e840ac8438a] Running
	I1019 16:22:33.910325  278987 system_pods.go:61] "kube-ingress-dns-minikube" [1d4e6155-2eaf-4ca7-8bcc-2c038d370e02] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:33.910333  278987 system_pods.go:61] "kube-proxy-46rm2" [6e2508c5-37d4-4052-9989-f3fc5bd3258c] Running
	I1019 16:22:33.910338  278987 system_pods.go:61] "kube-scheduler-addons-305823" [f85da7d3-fbb0-479f-9947-665cd76d150a] Running
	I1019 16:22:33.910353  278987 system_pods.go:61] "metrics-server-85b7d694d7-4blgt" [e0c8ad3b-5bc4-4de6-9e24-70745935d251] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:33.910365  278987 system_pods.go:61] "nvidia-device-plugin-daemonset-dw8kx" [997025d9-b384-42a1-8304-7dc9cd3983b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:33.910387  278987 system_pods.go:61] "registry-6b586f9694-nsk6g" [cb1792d7-001f-4116-9d14-81d9bf1296bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:33.910395  278987 system_pods.go:61] "registry-creds-764b6fb674-pmhck" [a8282e68-1ef5-490b-836b-afcd00d1de50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:33.910399  278987 system_pods.go:61] "registry-proxy-csgnm" [feb6fb77-6deb-4201-83f4-2cdb0d1c4c94] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:33.910406  278987 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gbsrk" [6a48fa26-a2be-430b-ad6f-d3edcfdc19b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:33.910413  278987 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xd424" [747dcc2c-0e24-44ed-8cda-19705b1a18a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:33.910417  278987 system_pods.go:61] "storage-provisioner" [151104a1-9c64-4621-9104-e70f0aba809f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:33.910429  278987 system_pods.go:74] duration metric: took 28.353491ms to wait for pod list to return data ...
	I1019 16:22:33.910443  278987 default_sa.go:34] waiting for default service account to be created ...
	I1019 16:22:33.935635  278987 default_sa.go:45] found service account: "default"
	I1019 16:22:33.935686  278987 default_sa.go:55] duration metric: took 25.209858ms for default service account to be created ...
	I1019 16:22:33.935702  278987 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 16:22:33.957468  278987 system_pods.go:86] 20 kube-system pods found
	I1019 16:22:33.957535  278987 system_pods.go:89] "amd-gpu-device-plugin-c8fj2" [eb0683d4-291a-49e4-a60e-470700b7a804] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1019 16:22:33.957553  278987 system_pods.go:89] "coredns-66bc5c9577-dvhs7" [72c5f4f8-21ed-4334-8345-455781b7b29f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:33.957565  278987 system_pods.go:89] "coredns-66bc5c9577-g2x7b" [b3a9f036-63b2-429e-9f7c-aabe1aab698c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 16:22:33.957580  278987 system_pods.go:89] "csi-hostpath-attacher-0" [e1ab3346-13c0-4907-af41-1723d9a2454d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1019 16:22:33.957590  278987 system_pods.go:89] "csi-hostpath-resizer-0" [7aa89fab-980e-4dda-ac1c-2c2f4737a86f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1019 16:22:33.957604  278987 system_pods.go:89] "csi-hostpathplugin-jpdbc" [caa2c70a-ca56-43a6-91fe-d94b53794b7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1019 16:22:33.957612  278987 system_pods.go:89] "etcd-addons-305823" [51cbd091-70c5-4935-bc54-f2ba4220e6b9] Running
	I1019 16:22:33.957619  278987 system_pods.go:89] "kube-apiserver-addons-305823" [5a6b39cb-40a1-4701-bd23-da97be4e6a52] Running
	I1019 16:22:33.957626  278987 system_pods.go:89] "kube-controller-manager-addons-305823" [e6c6d0d0-66dc-4b58-8c9c-7e840ac8438a] Running
	I1019 16:22:33.957641  278987 system_pods.go:89] "kube-ingress-dns-minikube" [1d4e6155-2eaf-4ca7-8bcc-2c038d370e02] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1019 16:22:33.957649  278987 system_pods.go:89] "kube-proxy-46rm2" [6e2508c5-37d4-4052-9989-f3fc5bd3258c] Running
	I1019 16:22:33.957657  278987 system_pods.go:89] "kube-scheduler-addons-305823" [f85da7d3-fbb0-479f-9947-665cd76d150a] Running
	I1019 16:22:33.957666  278987 system_pods.go:89] "metrics-server-85b7d694d7-4blgt" [e0c8ad3b-5bc4-4de6-9e24-70745935d251] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1019 16:22:33.957679  278987 system_pods.go:89] "nvidia-device-plugin-daemonset-dw8kx" [997025d9-b384-42a1-8304-7dc9cd3983b3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1019 16:22:33.957688  278987 system_pods.go:89] "registry-6b586f9694-nsk6g" [cb1792d7-001f-4116-9d14-81d9bf1296bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1019 16:22:33.957701  278987 system_pods.go:89] "registry-creds-764b6fb674-pmhck" [a8282e68-1ef5-490b-836b-afcd00d1de50] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1019 16:22:33.957744  278987 system_pods.go:89] "registry-proxy-csgnm" [feb6fb77-6deb-4201-83f4-2cdb0d1c4c94] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1019 16:22:33.957755  278987 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gbsrk" [6a48fa26-a2be-430b-ad6f-d3edcfdc19b9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:33.957768  278987 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xd424" [747dcc2c-0e24-44ed-8cda-19705b1a18a8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1019 16:22:33.957777  278987 system_pods.go:89] "storage-provisioner" [151104a1-9c64-4621-9104-e70f0aba809f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1019 16:22:33.957801  278987 system_pods.go:126] duration metric: took 22.080414ms to wait for k8s-apps to be running ...
	I1019 16:22:33.957815  278987 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 16:22:33.957902  278987 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:22:33.992636  278987 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1019 16:22:33.992665  278987 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1019 16:22:34.144942  278987 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:22:34.144969  278987 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1019 16:22:34.258266  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1019 16:22:34.373763  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.373929  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.376882  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:34.874682  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:34.875339  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:34.878055  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.374699  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.375520  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.376666  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:35.696937  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.479503139s)
	I1019 16:22:35.697045  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:35.697075  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:35.697388  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:35.697406  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:35.697433  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:35.697441  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:35.697726  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:35.697741  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:22:35.697744  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:35.907230  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:35.907236  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:35.912538  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.083643  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.860456654s)
	W1019 16:22:36.083685  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:36.083704  278987 retry.go:31] will retry after 198.696777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:36.083782  278987 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.125816394s)
	I1019 16:22:36.083827  278987 system_svc.go:56] duration metric: took 2.126006865s WaitForService to wait for kubelet
	I1019 16:22:36.083830  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.825516176s)
	I1019 16:22:36.083841  278987 kubeadm.go:587] duration metric: took 11.498226334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 16:22:36.083874  278987 node_conditions.go:102] verifying NodePressure condition ...
	I1019 16:22:36.083883  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:36.083911  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:36.084198  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:36.084213  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:36.084221  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:22:36.084228  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:22:36.084490  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:22:36.084511  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:22:36.085464  278987 addons.go:480] Verifying addon gcp-auth=true in "addons-305823"
	I1019 16:22:36.087499  278987 out.go:179] * Verifying gcp-auth addon...
	I1019 16:22:36.089085  278987 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1019 16:22:36.102875  278987 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1019 16:22:36.102897  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.103637  278987 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 16:22:36.103671  278987 node_conditions.go:123] node cpu capacity is 2
	I1019 16:22:36.103688  278987 node_conditions.go:105] duration metric: took 19.807275ms to run NodePressure ...
	I1019 16:22:36.103702  278987 start.go:242] waiting for startup goroutines ...
	I1019 16:22:36.282940  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:36.375433  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.376641  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.377807  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:36.596642  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:36.874418  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:36.875118  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:36.878024  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.097591  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.376425  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.376541  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.377288  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.515545  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.232548304s)
	W1019 16:22:37.515590  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:37.515614  278987 retry.go:31] will retry after 456.87085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:37.594774  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:37.877602  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:37.880849  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:37.880891  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:37.973035  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:38.093529  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.372788  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.372824  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.373852  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:38.595608  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:38.874124  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:38.878502  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:38.879122  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.092903  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.193007  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.219912711s)
	W1019 16:22:39.193047  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:39.193071  278987 retry.go:31] will retry after 645.093929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:39.372314  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.372337  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.373617  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:39.594043  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:39.839251  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:39.875418  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:39.877462  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:39.878879  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.096458  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.371465  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.375393  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.377016  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.594399  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:40.871449  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:40.871546  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:40.873927  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:40.892035  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.052737963s)
	W1019 16:22:40.892091  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:40.892120  278987 retry.go:31] will retry after 1.697681389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:41.093896  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.373117  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.374718  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.375547  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:41.594950  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:41.871774  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:41.872788  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:41.875640  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.093790  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.369881  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.374692  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:42.375426  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.589931  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:42.593108  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:42.871454  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:42.872125  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:42.875026  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.111086  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.372006  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.372091  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.374316  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:43.597904  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:43.617227  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.027244327s)
	W1019 16:22:43.617270  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:43.617295  278987 retry.go:31] will retry after 2.62792937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:43.875055  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:43.875547  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:43.875697  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.093415  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.373340  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.375271  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.375489  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:44.593153  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:44.870482  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:44.873965  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:44.874053  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.095078  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.371024  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.371259  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.372038  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:45.800459  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:45.869682  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:45.872641  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:45.873210  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.093275  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.245523  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:46.370673  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.373543  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.373887  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:46.593865  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:46.872544  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:46.872728  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:46.873563  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.094507  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:47.151135  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:47.151175  278987 retry.go:31] will retry after 3.223199545s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:47.392037  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.396799  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:47.397269  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.593062  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:47.882801  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:47.883816  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:47.883920  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.094310  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.369256  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.371148  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:48.372277  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.593139  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:48.870503  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:48.870974  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:48.871661  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.092831  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.373596  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:49.373915  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.375107  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.592245  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:49.919186  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:49.920348  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:49.920609  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.097141  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.374240  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.374332  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:50.374614  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.374718  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:50.596921  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:50.879156  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:50.880068  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:50.880559  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.093344  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:22:51.170673  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:51.170714  278987 retry.go:31] will retry after 4.08918991s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:51.371404  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.373066  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.373094  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:51.592698  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:51.870325  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:51.871521  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:51.872099  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.093018  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.370738  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.370957  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.372244  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:52.592512  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:52.871953  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:52.871953  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:52.875468  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.094665  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.371195  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:53.372317  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.373505  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.596800  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:53.871148  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:53.871831  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:53.872878  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.093802  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.371125  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.371186  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:54.372162  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.592129  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:54.893438  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:54.893727  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:54.893853  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.092805  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.261110  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:22:55.373237  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.373408  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:55.374904  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.594667  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:55.871773  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:55.872084  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:55.875345  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1019 16:22:56.007518  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:56.007565  278987 retry.go:31] will retry after 4.009499246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:22:56.093509  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.372426  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.373367  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.375627  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:56.594100  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:56.873224  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:56.873805  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:56.875345  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.092113  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.372720  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.375535  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:57.375793  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.593871  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:57.872199  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:57.876785  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:57.877145  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.093730  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.372496  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.373791  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:58.374165  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.592252  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:58.871871  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:58.872777  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:58.873480  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.094150  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.375805  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.377364  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.377989  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:22:59.591875  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:22:59.871365  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:22:59.872533  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:22:59.872862  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.018146  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:00.095589  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:00.373146  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.373218  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:00.373383  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.593052  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:23:00.706097  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:00.706137  278987 retry.go:31] will retry after 7.716631021s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:00.869957  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:00.871282  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:00.871416  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.092532  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.370135  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:01.370970  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.372049  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.592528  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:01.875622  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:01.876188  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:01.877125  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.092678  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.370545  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.371038  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:02.371098  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.595113  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:02.871302  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:02.876290  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:02.876582  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.095450  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.372628  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.373118  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.376510  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:03.593291  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:03.871087  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:03.873028  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:03.873437  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.101249  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.370257  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.370321  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.371653  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:04.595171  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:04.874047  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:04.876430  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:04.876531  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.093944  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.373233  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.373266  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:05.376966  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.592943  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:05.870780  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:05.873670  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:05.873798  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.182520  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.386738  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.386923  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:06.387586  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.595268  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:06.870658  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:06.873361  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:06.873517  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.093689  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.371158  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.371158  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.372850  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:07.594160  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:07.971281  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:07.971281  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:07.975141  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.093861  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.373632  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:08.375373  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.375432  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.423633  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:08.594519  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:08.872124  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:08.872589  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:08.872844  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.093782  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.371669  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:09.371738  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.372031  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.520738  278987 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.097063073s)
	W1019 16:23:09.520788  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:09.520813  278987 retry.go:31] will retry after 20.157290031s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:09.592957  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:09.875877  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:09.877765  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:09.877893  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.092580  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.372663  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.372829  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:10.373433  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.593456  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:10.874966  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:10.875999  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:10.877202  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.092215  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.369454  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.371170  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.371444  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:11.592739  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:11.869926  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:11.870576  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:11.871406  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.093013  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.372650  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.374022  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:12.374054  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.593822  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:12.873997  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:12.874387  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:12.875163  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.094237  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.369578  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.371479  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.372413  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:13.599369  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:13.872925  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:13.873108  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:13.873166  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.093810  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.370256  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.370663  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.371395  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:14.592807  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:14.870240  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:14.871473  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:14.871607  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.092738  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.370609  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.371068  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.371308  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:15.592397  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:15.870058  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:15.870085  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:15.871464  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.092382  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.372957  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:16.373020  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.374354  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.593318  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:16.872993  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:16.874336  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:16.874535  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.095278  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.370571  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:17.371891  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.372880  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.594251  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:17.872688  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:17.873225  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:17.873932  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.094707  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.370964  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.371576  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:18.372094  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.592847  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:18.872158  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:18.873218  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:18.876117  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1019 16:23:19.093116  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.372131  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.372247  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:19.373525  278987 kapi.go:107] duration metric: took 46.505847595s to wait for kubernetes.io/minikube-addons=registry ...
	I1019 16:23:19.595288  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:19.870213  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:19.871910  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.092991  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.371450  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.371931  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:20.593066  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:20.869855  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:20.871205  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.092139  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.370846  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.371820  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:21.593153  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:21.869574  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:21.870382  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.092567  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.372020  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.372053  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:22.593677  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:22.871180  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:22.872062  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.092351  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.372747  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:23.372776  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.592860  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:23.871197  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:23.871357  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.092378  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.371232  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:24.371588  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.593039  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:24.870020  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:24.871727  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.095318  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.372685  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:25.375957  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:25.593923  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:25.868923  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:25.872101  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.091967  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.518579  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.519800  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:26.593676  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:26.870632  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:26.871268  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:27.091954  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.371008  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:27.374781  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:27.593197  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:27.871142  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:27.872364  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.189755  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.373015  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:28.373116  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:28.593392  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:28.869613  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:28.870623  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.093333  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.370729  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:29.371344  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:29.592259  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:29.678805  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:23:29.869500  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:29.872393  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.094316  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:30.371509  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:30.371765  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:30.594885  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:23:30.660533  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:30.660570  278987 retry.go:31] will retry after 29.272895004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:23:30.874106  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:30.875039  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.093379  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.377299  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:31.383113  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:31.596554  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:31.872067  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:31.872274  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.092714  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.372274  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:32.372275  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:32.596198  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:32.873368  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:32.873661  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.094079  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.380136  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:33.380517  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:33.594283  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:33.873938  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:33.877275  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:34.093315  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:34.375086  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:34.375309  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:34.594529  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:34.869685  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:34.873852  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.093388  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:35.372672  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:35.373104  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.592939  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:35.871298  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:35.873098  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:36.093100  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:36.374273  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:36.374363  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:36.592212  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:36.871788  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:36.872475  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.095571  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:37.373198  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:37.373245  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:37.593810  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:37.872318  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:37.873824  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.106641  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:38.370712  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:38.371189  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:38.592697  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:38.878810  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:38.879374  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:39.092306  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:39.374603  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:39.374738  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:39.602761  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:39.871062  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:39.871263  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:40.092767  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:40.371591  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:40.371645  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:40.594452  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:40.877196  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:40.877704  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:41.093719  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:41.370262  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:41.370742  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:41.593848  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:41.871550  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:41.874324  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:42.093054  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:42.369585  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:42.372781  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:42.593544  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:42.871218  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:42.871272  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:43.092792  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:43.370861  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:43.374837  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:43.593216  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:44.008538  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:44.011776  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:44.093529  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:44.370767  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:44.374322  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:44.593696  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:44.871917  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:44.873690  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:45.094070  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:45.380722  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:45.381267  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:45.598383  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:45.879922  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:45.880071  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:46.238813  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:46.371006  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:46.371732  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:46.593730  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:46.877661  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:46.877939  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:47.095198  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:47.370531  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:47.371140  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:47.594347  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:47.877744  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:47.885304  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:48.170550  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:48.371795  278987 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1019 16:23:48.372281  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:48.592094  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:48.871666  278987 kapi.go:107] duration metric: took 1m16.00575774s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1019 16:23:48.872707  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:49.096168  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:49.372288  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:49.592679  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:49.879908  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:50.094782  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:50.373004  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:50.593457  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:50.870953  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:51.093773  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:51.371333  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:51.593547  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:51.871372  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:52.092017  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:52.370455  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:52.592298  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:52.871407  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:53.092476  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:53.370759  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:53.595233  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:53.873309  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:54.091773  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:54.372578  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:54.595300  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:54.872512  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:55.092713  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:55.371497  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:55.592887  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:55.871638  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:56.093609  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:56.374321  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:56.593576  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:56.871719  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:57.094685  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:57.372495  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:57.594431  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:57.871601  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:58.096715  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:58.371544  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:58.592524  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:58.873933  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:59.094204  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:59.371245  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1019 16:23:59.595188  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:23:59.876821  278987 kapi.go:107] duration metric: took 1m26.009873704s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1019 16:23:59.933937  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:24:00.093803  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:24:00.585628  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:24:00.585674  278987 retry.go:31] will retry after 17.071599485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:24:00.592825  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:01.092621  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:01.594222  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:02.092851  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:02.592718  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:03.094007  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:03.592744  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:04.093573  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:04.592919  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:05.093424  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:05.594245  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:06.092699  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:06.592108  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:07.092989  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:07.592690  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:08.093964  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:08.592150  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:09.092721  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:09.592236  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:10.092441  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:10.592835  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:11.092835  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:11.592673  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:12.093745  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:12.592899  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:13.092563  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:13.592930  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:14.092836  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:14.593307  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:15.093438  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:15.592833  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:16.092079  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:16.592107  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:17.093324  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:17.593062  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:17.658200  278987 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1019 16:24:18.094403  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1019 16:24:18.305576  278987 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1019 16:24:18.305681  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:24:18.305700  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:24:18.305973  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:24:18.306005  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 16:24:18.306015  278987 main.go:143] libmachine: Making call to close driver server
	I1019 16:24:18.306022  278987 main.go:143] libmachine: (addons-305823) Calling .Close
	I1019 16:24:18.306031  278987 main.go:143] libmachine: (addons-305823) DBG | Closing plugin on server side
	I1019 16:24:18.306261  278987 main.go:143] libmachine: Successfully made call to close driver server
	I1019 16:24:18.306278  278987 main.go:143] libmachine: Making call to close connection to plugin binary
	W1019 16:24:18.306384  278987 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
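	(The repeated inspektor-gadget retries above all fail for the same reason: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest has no top-level apiVersion or kind field. The actual file contents are not captured in this log, so the lines below are only a minimal sketch of the header the validator expects on a CustomResourceDefinition manifest; the CRD name is a hypothetical placeholder, not taken from the addon.)
	# sketch only: every object kubectl validates needs these two top-level fields
	apiVersion: apiextensions.k8s.io/v1      # API group/version of the CRD object itself
	kind: CustomResourceDefinition           # object kind; "[apiVersion not set, kind not set]" means both are missing
	metadata:
	  name: examples.example.com             # hypothetical name, for illustration only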
	I1019 16:24:18.592141  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:19.096015  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:19.593637  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:20.093250  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:20.593173  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:21.093222  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:21.593691  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:22.093262  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:22.593068  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:23.093361  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:23.593399  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:24.093163  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:24.592684  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:25.093174  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:25.592693  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:26.093772  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:26.592213  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:27.093232  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:27.593381  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:28.093414  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:28.592205  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:29.097094  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:29.595178  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:30.092965  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:30.592151  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:31.093241  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:31.592755  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:32.092656  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:32.592454  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:33.093275  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:33.592871  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:34.092977  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:34.592580  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:35.092475  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:35.593544  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:36.092761  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:36.593014  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:37.093182  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:37.593286  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:38.093585  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:38.593916  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:39.094235  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:39.593474  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:40.092814  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:40.592519  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:41.093702  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:41.592254  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:42.092741  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:42.593069  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:43.093068  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:43.592329  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:44.093745  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:44.592742  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:45.093099  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:45.592935  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:46.093445  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:46.594246  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:47.094021  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:47.592715  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:48.092666  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:48.593291  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:49.095327  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:49.593576  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:50.093355  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:50.592764  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:51.093408  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:51.593159  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:52.094445  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:52.593751  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:53.093037  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:53.592969  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:54.092425  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:54.593126  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:55.097844  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:55.595030  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:56.098682  278987 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1019 16:24:56.593853  278987 kapi.go:107] duration metric: took 2m20.504766879s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1019 16:24:56.595432  278987 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-305823 cluster.
	I1019 16:24:56.596395  278987 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1019 16:24:56.597284  278987 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
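	(The gcp-auth guidance above names a pod label key, gcp-auth-skip-secret, that the addon checks before mounting credentials. As a minimal sketch only — the label key comes from the message above, while the "true" value and the pod/image names are assumptions for illustration — opting a single pod out would look like:)
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-example        # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"    # key named in the log; value assumed here
	spec:
	  containers:
	  - name: app
	    image: nginx                    # placeholder image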
	I1019 16:24:56.598187  278987 out.go:179] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, registry-creds, metrics-server, storage-provisioner, yakd, amd-gpu-device-plugin, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1019 16:24:56.599072  278987 addons.go:515] duration metric: took 2m32.013441518s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner registry-creds metrics-server storage-provisioner yakd amd-gpu-device-plugin default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1019 16:24:56.599115  278987 start.go:247] waiting for cluster config update ...
	I1019 16:24:56.599133  278987 start.go:256] writing updated cluster config ...
	I1019 16:24:56.599419  278987 ssh_runner.go:195] Run: rm -f paused
	I1019 16:24:56.605075  278987 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:24:56.609953  278987 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dvhs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.615101  278987 pod_ready.go:94] pod "coredns-66bc5c9577-dvhs7" is "Ready"
	I1019 16:24:56.615121  278987 pod_ready.go:86] duration metric: took 5.147163ms for pod "coredns-66bc5c9577-dvhs7" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.618261  278987 pod_ready.go:83] waiting for pod "etcd-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.624033  278987 pod_ready.go:94] pod "etcd-addons-305823" is "Ready"
	I1019 16:24:56.624048  278987 pod_ready.go:86] duration metric: took 5.77054ms for pod "etcd-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.626561  278987 pod_ready.go:83] waiting for pod "kube-apiserver-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.631170  278987 pod_ready.go:94] pod "kube-apiserver-addons-305823" is "Ready"
	I1019 16:24:56.631186  278987 pod_ready.go:86] duration metric: took 4.606646ms for pod "kube-apiserver-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:56.633170  278987 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:57.010817  278987 pod_ready.go:94] pod "kube-controller-manager-addons-305823" is "Ready"
	I1019 16:24:57.010847  278987 pod_ready.go:86] duration metric: took 377.659298ms for pod "kube-controller-manager-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:57.211568  278987 pod_ready.go:83] waiting for pod "kube-proxy-46rm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:57.608713  278987 pod_ready.go:94] pod "kube-proxy-46rm2" is "Ready"
	I1019 16:24:57.608740  278987 pod_ready.go:86] duration metric: took 397.147992ms for pod "kube-proxy-46rm2" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:57.809453  278987 pod_ready.go:83] waiting for pod "kube-scheduler-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:58.209421  278987 pod_ready.go:94] pod "kube-scheduler-addons-305823" is "Ready"
	I1019 16:24:58.209455  278987 pod_ready.go:86] duration metric: took 399.973381ms for pod "kube-scheduler-addons-305823" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 16:24:58.209473  278987 pod_ready.go:40] duration metric: took 1.60436454s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 16:24:58.254558  278987 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 16:24:58.256072  278987 out.go:179] * Done! kubectl is now configured to use "addons-305823" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.400258240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d55a166-c03f-4db1-b64e-9ed4d28a2241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.400328685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d55a166-c03f-4db1-b64e-9ed4d28a2241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.400749311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f2e229f24b5125e05bb2edf44e3187a17bbb2acfd53e2868e64b09591004826,PodSandboxId:d654e331a6729425d98e0425f4e932e3d5428fc0314425eb13022cf19285a414,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760891122671590370,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b84d6d2-a870-4484-b316-6000b51924a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014c90fefc5715de532da91d420fb3013ff66803aa370f905f12159d13d738f6,PodSandboxId:20df72d18db3620e2deb30ebb36505b886504563852b539ab3081e1ecf1c9f03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760891102642752573,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33128726cf77eed51266a4752ed59c4df846c0ed9f07b351a3737e2100da9241,PodSandboxId:e8ec1661a13454e36e4167954915165513a0663bdc4d7c25330ff1f36fd681b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760891028405902745,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-tzg8f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee186d89-829e-41dc-afd7-42f9f6455789,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a22b29bd753e5136ed352830026e02e5b71a365374bc8162c78ade8309d25e30,PodSandboxId:526f695b4a9f3434b6c1364036af2d158eabde1f2211be7fa412940b3e914e84,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760891016651027967,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sk2gw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87e958e2-8b77-4b38-9e60-7ca77fb61288,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9650bd2d814d00d0310019e602c91dd9ed419697ec16ce338102ba12c7f4cc,PodSandboxId:cdd6837e921dd5bc62d2d903716fbd7e92decccbe6df2f8186eb9f307a24312a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760891016431895468,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tjjvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f5330f85-16fe-4515-bc1d-7a2ff61842fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4a9ccbf88ab59add9e7d25b2a808ef7a0c2054a6a694673368fbd34b566a09,PodSandboxId:7b09adcd2266f8a260b3c59ce5ef691f13f7a1950650efd96cff14bf8d9e610c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760891013355033716,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-75v6k,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f3ec496f-83ed-47e3-97c0-28d2e46cfb97,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce835576c4095a4e338c78cece9a0648872f1db0e93ab1bd59555bc5a4f5ea6c,PodSandboxId:1517874da07006eee03cd04413303fbfc217431d111bc405793b5b0188e4ae68,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760891002703281347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-l5h9z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a4a0b84-3eb7-4a05-93d5-ff3ff4f08b74,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0531eda3e3136d08727bba475a3f9d3f0cba6bb23877ff4c3b58dffdc64d2e7e,PodSandboxId:40647999f63bf6d509457ea5ea6b93df40ee51a81d2b69cc7a0e600d33c17ab6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760890989887039526,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4e6155-2eaf-4ca7-8bcc-2c038d370e02,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5a5475c34676f4f82c7dffac0a12cab877781169119a9446e1ccff39843062,PodSandboxId:a01c99121dbf3ea82d0dc3f1076099ff10a1
e86b0ed4e89c39b4e10fe444e86d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760890973597887426,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c8fj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb0683d4-291a-49e4-a60e-470700b7a804,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b555b09a507a78139208843213f8fcf702895b8133b4ef4c3b5916c43bf9eff,PodSandbo
xId:16dc451d6d3087045825e2cef9bc190122dd7dd44931903b50c67073c2a441c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760890952386288287,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151104a1-9c64-4621-9104-e70f0aba809f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543eec372e554caed8f222be1d4f17ff83a006c4834198e02d61139252fd69b3,PodSandboxId:6cd3ece8
174ca3e19a8c88b347766c8d6867b502f97bebc405af68149ca29946,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760890945925423699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dvhs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c5f4f8-21ed-4334-8345-455781b7b29f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ccf9e8cc0a674165f370256a12f89e329d5c770a57c8d3c43dbb96113353f7,PodSandboxId:80159d0290621a7fdc47aa7f243c2bf27f40e41c174b8cb3d2af76aedf6c9d2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760890945176795642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-46rm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2508c5-37d4-4052-9989-f3fc5bd3258c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbc411666fd74b787500650e3b2f680323aab6a50abb596526e733665d2fb95,PodSandboxId:f1e9993e1a673729511d4cf235e971bc626db53fcfc07e4a63e159ebc142808d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760890933775167127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78587563b8c743d64d4fc5558e6129fa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94de7f42912b6d061b07c7d5211da38005ada28f1ea960a779ffcfa91fe2f26c,PodSandboxId:10374f284028570ed228f811d123acab583606630e88f83c0f4843d599b96756,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760890933755569749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 440dd0f48ed6c0650637df0b21048ded,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695fd25ae450485ec9a49ac6cf21c3eeae88efa959a601cb51198ebc3ffb177b,PodSandboxId:673ee2b0165fe945027593ddac46096514d9435db5c2e71e33f375ab9c331c83,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760890933730025259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e534005f60d285b98e417f41c8364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1d3ad959b93e8e8e49f4681bc322d2825c6cf1da2034b159d6d0f755652c0b,PodSandboxId:17fb857c17d88d657d9641ef8849439ae4333cd8aa1d59ba674dd1237491d4b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760890933719943890,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b7d19ad7631633238b5b69db373fd6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d55a166-c03f-4db1-b64e-9ed4d28a2241 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.440006560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d924845-f4af-4393-8a84-c6813b7594a8 name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.440073829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d924845-f4af-4393-8a84-c6813b7594a8 name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.441009847Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6732405c-8487-4df3-9d9c-8d83b18aab24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.442306397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760891266442283141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6732405c-8487-4df3-9d9c-8d83b18aab24 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.442948777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edaf14e0-a4e6-4ba5-acd6-0fa2f3bccf9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.443029818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edaf14e0-a4e6-4ba5-acd6-0fa2f3bccf9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.443338603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f2e229f24b5125e05bb2edf44e3187a17bbb2acfd53e2868e64b09591004826,PodSandboxId:d654e331a6729425d98e0425f4e932e3d5428fc0314425eb13022cf19285a414,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760891122671590370,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b84d6d2-a870-4484-b316-6000b51924a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014c90fefc5715de532da91d420fb3013ff66803aa370f905f12159d13d738f6,PodSandboxId:20df72d18db3620e2deb30ebb36505b886504563852b539ab3081e1ecf1c9f03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760891102642752573,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33128726cf77eed51266a4752ed59c4df846c0ed9f07b351a3737e2100da9241,PodSandboxId:e8ec1661a13454e36e4167954915165513a0663bdc4d7c25330ff1f36fd681b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760891028405902745,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-tzg8f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee186d89-829e-41dc-afd7-42f9f6455789,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a22b29bd753e5136ed352830026e02e5b71a365374bc8162c78ade8309d25e30,PodSandboxId:526f695b4a9f3434b6c1364036af2d158eabde1f2211be7fa412940b3e914e84,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760891016651027967,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sk2gw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87e958e2-8b77-4b38-9e60-7ca77fb61288,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9650bd2d814d00d0310019e602c91dd9ed419697ec16ce338102ba12c7f4cc,PodSandboxId:cdd6837e921dd5bc62d2d903716fbd7e92decccbe6df2f8186eb9f307a24312a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760891016431895468,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tjjvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f5330f85-16fe-4515-bc1d-7a2ff61842fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4a9ccbf88ab59add9e7d25b2a808ef7a0c2054a6a694673368fbd34b566a09,PodSandboxId:7b09adcd2266f8a260b3c59ce5ef691f13f7a1950650efd96cff14bf8d9e610c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760891013355033716,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-75v6k,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f3ec496f-83ed-47e3-97c0-28d2e46cfb97,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce835576c4095a4e338c78cece9a0648872f1db0e93ab1bd59555bc5a4f5ea6c,PodSandboxId:1517874da07006eee03cd04413303fbfc217431d111bc405793b5b0188e4ae68,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760891002703281347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-l5h9z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a4a0b84-3eb7-4a05-93d5-ff3ff4f08b74,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0531eda3e3136d08727bba475a3f9d3f0cba6bb23877ff4c3b58dffdc64d2e7e,PodSandboxId:40647999f63bf6d509457ea5ea6b93df40ee51a81d2b69cc7a0e600d33c17ab6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760890989887039526,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4e6155-2eaf-4ca7-8bcc-2c038d370e02,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5a5475c34676f4f82c7dffac0a12cab877781169119a9446e1ccff39843062,PodSandboxId:a01c99121dbf3ea82d0dc3f1076099ff10a1
e86b0ed4e89c39b4e10fe444e86d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760890973597887426,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c8fj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb0683d4-291a-49e4-a60e-470700b7a804,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b555b09a507a78139208843213f8fcf702895b8133b4ef4c3b5916c43bf9eff,PodSandbo
xId:16dc451d6d3087045825e2cef9bc190122dd7dd44931903b50c67073c2a441c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760890952386288287,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151104a1-9c64-4621-9104-e70f0aba809f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543eec372e554caed8f222be1d4f17ff83a006c4834198e02d61139252fd69b3,PodSandboxId:6cd3ece8
174ca3e19a8c88b347766c8d6867b502f97bebc405af68149ca29946,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760890945925423699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dvhs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c5f4f8-21ed-4334-8345-455781b7b29f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ccf9e8cc0a674165f370256a12f89e329d5c770a57c8d3c43dbb96113353f7,PodSandboxId:80159d0290621a7fdc47aa7f243c2bf27f40e41c174b8cb3d2af76aedf6c9d2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760890945176795642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-46rm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2508c5-37d4-4052-9989-f3fc5bd3258c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbc411666fd74b787500650e3b2f680323aab6a50abb596526e733665d2fb95,PodSandboxId:f1e9993e1a673729511d4cf235e971bc626db53fcfc07e4a63e159ebc142808d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760890933775167127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78587563b8c743d64d4fc5558e6129fa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94de7f42912b6d061b07c7d5211da38005ada28f1ea960a779ffcfa91fe2f26c,PodSandboxId:10374f284028570ed228f811d123acab583606630e88f83c0f4843d599b96756,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760890933755569749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 440dd0f48ed6c0650637df0b21048ded,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695fd25ae450485ec9a49ac6cf21c3eeae88efa959a601cb51198ebc3ffb177b,PodSandboxId:673ee2b0165fe945027593ddac46096514d9435db5c2e71e33f375ab9c331c83,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760890933730025259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e534005f60d285b98e417f41c8364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1d3ad959b93e8e8e49f4681bc322d2825c6cf1da2034b159d6d0f755652c0b,PodSandboxId:17fb857c17d88d657d9641ef8849439ae4333cd8aa1d59ba674dd1237491d4b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760890933719943890,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b7d19ad7631633238b5b69db373fd6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edaf14e0-a4e6-4ba5-acd6-0fa2f3bccf9a name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.477644447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96387701-f9ca-4968-af62-ed9d38a78ce3 name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.477744669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96387701-f9ca-4968-af62-ed9d38a78ce3 name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.478837775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e973d65a-0574-4a1f-bad1-beac501f7000 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.480092522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760891266480067369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e973d65a-0574-4a1f-bad1-beac501f7000 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.480700378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=586b1e6a-87ca-4871-94bf-ff4af0d7546e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.480787819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=586b1e6a-87ca-4871-94bf-ff4af0d7546e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.481115739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f2e229f24b5125e05bb2edf44e3187a17bbb2acfd53e2868e64b09591004826,PodSandboxId:d654e331a6729425d98e0425f4e932e3d5428fc0314425eb13022cf19285a414,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760891122671590370,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b84d6d2-a870-4484-b316-6000b51924a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014c90fefc5715de532da91d420fb3013ff66803aa370f905f12159d13d738f6,PodSandboxId:20df72d18db3620e2deb30ebb36505b886504563852b539ab3081e1ecf1c9f03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760891102642752573,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33128726cf77eed51266a4752ed59c4df846c0ed9f07b351a3737e2100da9241,PodSandboxId:e8ec1661a13454e36e4167954915165513a0663bdc4d7c25330ff1f36fd681b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760891028405902745,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-tzg8f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee186d89-829e-41dc-afd7-42f9f6455789,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a22b29bd753e5136ed352830026e02e5b71a365374bc8162c78ade8309d25e30,PodSandboxId:526f695b4a9f3434b6c1364036af2d158eabde1f2211be7fa412940b3e914e84,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760891016651027967,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sk2gw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87e958e2-8b77-4b38-9e60-7ca77fb61288,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9650bd2d814d00d0310019e602c91dd9ed419697ec16ce338102ba12c7f4cc,PodSandboxId:cdd6837e921dd5bc62d2d903716fbd7e92decccbe6df2f8186eb9f307a24312a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760891016431895468,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tjjvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f5330f85-16fe-4515-bc1d-7a2ff61842fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4a9ccbf88ab59add9e7d25b2a808ef7a0c2054a6a694673368fbd34b566a09,PodSandboxId:7b09adcd2266f8a260b3c59ce5ef691f13f7a1950650efd96cff14bf8d9e610c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760891013355033716,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-75v6k,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f3ec496f-83ed-47e3-97c0-28d2e46cfb97,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce835576c4095a4e338c78cece9a0648872f1db0e93ab1bd59555bc5a4f5ea6c,PodSandboxId:1517874da07006eee03cd04413303fbfc217431d111bc405793b5b0188e4ae68,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760891002703281347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-l5h9z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a4a0b84-3eb7-4a05-93d5-ff3ff4f08b74,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0531eda3e3136d08727bba475a3f9d3f0cba6bb23877ff4c3b58dffdc64d2e7e,PodSandboxId:40647999f63bf6d509457ea5ea6b93df40ee51a81d2b69cc7a0e600d33c17ab6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760890989887039526,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4e6155-2eaf-4ca7-8bcc-2c038d370e02,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5a5475c34676f4f82c7dffac0a12cab877781169119a9446e1ccff39843062,PodSandboxId:a01c99121dbf3ea82d0dc3f1076099ff10a1
e86b0ed4e89c39b4e10fe444e86d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760890973597887426,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c8fj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb0683d4-291a-49e4-a60e-470700b7a804,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b555b09a507a78139208843213f8fcf702895b8133b4ef4c3b5916c43bf9eff,PodSandbo
xId:16dc451d6d3087045825e2cef9bc190122dd7dd44931903b50c67073c2a441c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760890952386288287,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151104a1-9c64-4621-9104-e70f0aba809f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543eec372e554caed8f222be1d4f17ff83a006c4834198e02d61139252fd69b3,PodSandboxId:6cd3ece8
174ca3e19a8c88b347766c8d6867b502f97bebc405af68149ca29946,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760890945925423699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dvhs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c5f4f8-21ed-4334-8345-455781b7b29f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ccf9e8cc0a674165f370256a12f89e329d5c770a57c8d3c43dbb96113353f7,PodSandboxId:80159d0290621a7fdc47aa7f243c2bf27f40e41c174b8cb3d2af76aedf6c9d2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760890945176795642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-46rm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2508c5-37d4-4052-9989-f3fc5bd3258c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbc411666fd74b787500650e3b2f680323aab6a50abb596526e733665d2fb95,PodSandboxId:f1e9993e1a673729511d4cf235e971bc626db53fcfc07e4a63e159ebc142808d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760890933775167127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78587563b8c743d64d4fc5558e6129fa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94de7f42912b6d061b07c7d5211da38005ada28f1ea960a779ffcfa91fe2f26c,PodSandboxId:10374f284028570ed228f811d123acab583606630e88f83c0f4843d599b96756,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760890933755569749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 440dd0f48ed6c0650637df0b21048ded,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695fd25ae450485ec9a49ac6cf21c3eeae88efa959a601cb51198ebc3ffb177b,PodSandboxId:673ee2b0165fe945027593ddac46096514d9435db5c2e71e33f375ab9c331c83,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760890933730025259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e534005f60d285b98e417f41c8364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1d3ad959b93e8e8e49f4681bc322d2825c6cf1da2034b159d6d0f755652c0b,PodSandboxId:17fb857c17d88d657d9641ef8849439ae4333cd8aa1d59ba674dd1237491d4b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760890933719943890,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b7d19ad7631633238b5b69db373fd6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=586b1e6a-87ca-4871-94bf-ff4af0d7546e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.507973023Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:3f4a9ccbf88ab59add9e7d25b2a808ef7a0c2054a6a694673368fbd34b566a09,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="otel-collector/interceptors.go:62" id=a73d3ee2-c5bc-4ff0-b42a-7567a34f7d63 name=/runtime.v1.RuntimeService/ExecSync
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.526481366Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9e00bf2-839e-416f-b10e-4a1c78b62b3b name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.526931571Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9e00bf2-839e-416f-b10e-4a1c78b62b3b name=/runtime.v1.RuntimeService/Version
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.528611221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58598cbd-51d2-4a99-9650-f619ab606327 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.530324178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760891266530299441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58598cbd-51d2-4a99-9650-f619ab606327 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.530946153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac9d7797-f69c-46cd-839e-dd83efe2f583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.531022719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac9d7797-f69c-46cd-839e-dd83efe2f583 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 16:27:46 addons-305823 crio[819]: time="2025-10-19 16:27:46.531323493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f2e229f24b5125e05bb2edf44e3187a17bbb2acfd53e2868e64b09591004826,PodSandboxId:d654e331a6729425d98e0425f4e932e3d5428fc0314425eb13022cf19285a414,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1760891122671590370,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7b84d6d2-a870-4484-b316-6000b51924a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:014c90fefc5715de532da91d420fb3013ff66803aa370f905f12159d13d738f6,PodSandboxId:20df72d18db3620e2deb30ebb36505b886504563852b539ab3081e1ecf1c9f03,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1760891102642752573,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33128726cf77eed51266a4752ed59c4df846c0ed9f07b351a3737e2100da9241,PodSandboxId:e8ec1661a13454e36e4167954915165513a0663bdc4d7c25330ff1f36fd681b6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1760891028405902745,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-tzg8f,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ee186d89-829e-41dc-afd7-42f9f6455789,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a22b29bd753e5136ed352830026e02e5b71a365374bc8162c78ade8309d25e30,PodSandboxId:526f695b4a9f3434b6c1364036af2d158eabde1f2211be7fa412940b3e914e84,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Sta
te:CONTAINER_EXITED,CreatedAt:1760891016651027967,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sk2gw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 87e958e2-8b77-4b38-9e60-7ca77fb61288,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b9650bd2d814d00d0310019e602c91dd9ed419697ec16ce338102ba12c7f4cc,PodSandboxId:cdd6837e921dd5bc62d2d903716fbd7e92decccbe6df2f8186eb9f307a24312a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa939
17a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1760891016431895468,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tjjvd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f5330f85-16fe-4515-bc1d-7a2ff61842fe,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f4a9ccbf88ab59add9e7d25b2a808ef7a0c2054a6a694673368fbd34b566a09,PodSandboxId:7b09adcd2266f8a260b3c59ce5ef691f13f7a1950650efd96cff14bf8d9e610c,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38d
ca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1760891013355033716,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-75v6k,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f3ec496f-83ed-47e3-97c0-28d2e46cfb97,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce835576c4095a4e338c78cece9a0648872f1db0e93ab1bd59555bc5a4f5ea6c,PodSandboxId:1517874da07006eee03cd04413303fbfc217431d111bc405793b5b0188e4ae68,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e
4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1760891002703281347,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-l5h9z,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 3a4a0b84-3eb7-4a05-93d5-ff3ff4f08b74,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0531eda3e3136d08727bba475a3f9d3f0cba6bb23877ff4c3b58dffdc64d2e7e,PodSandboxId:40647999f63bf6d509457ea5ea6b93df40ee51a81d2b69cc7a0e600d33c17ab6,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase
/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1760890989887039526,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d4e6155-2eaf-4ca7-8bcc-2c038d370e02,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba5a5475c34676f4f82c7dffac0a12cab877781169119a9446e1ccff39843062,PodSandboxId:a01c99121dbf3ea82d0dc3f1076099ff10a1
e86b0ed4e89c39b4e10fe444e86d,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1760890973597887426,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-c8fj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb0683d4-291a-49e4-a60e-470700b7a804,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b555b09a507a78139208843213f8fcf702895b8133b4ef4c3b5916c43bf9eff,PodSandbo
xId:16dc451d6d3087045825e2cef9bc190122dd7dd44931903b50c67073c2a441c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760890952386288287,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151104a1-9c64-4621-9104-e70f0aba809f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:543eec372e554caed8f222be1d4f17ff83a006c4834198e02d61139252fd69b3,PodSandboxId:6cd3ece8
174ca3e19a8c88b347766c8d6867b502f97bebc405af68149ca29946,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760890945925423699,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-dvhs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72c5f4f8-21ed-4334-8345-455781b7b29f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\
":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44ccf9e8cc0a674165f370256a12f89e329d5c770a57c8d3c43dbb96113353f7,PodSandboxId:80159d0290621a7fdc47aa7f243c2bf27f40e41c174b8cb3d2af76aedf6c9d2c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760890945176795642,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-46rm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2508c5-37d4-4052-9989-f3fc5bd3258c,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.
kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cbc411666fd74b787500650e3b2f680323aab6a50abb596526e733665d2fb95,PodSandboxId:f1e9993e1a673729511d4cf235e971bc626db53fcfc07e4a63e159ebc142808d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760890933775167127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78587563b8c743d64d4fc5558e6129fa,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.contai
ner.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94de7f42912b6d061b07c7d5211da38005ada28f1ea960a779ffcfa91fe2f26c,PodSandboxId:10374f284028570ed228f811d123acab583606630e88f83c0f4843d599b96756,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760890933755569749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 440dd0f48ed6c0650637df0b21048ded,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:695fd25ae450485ec9a49ac6cf21c3eeae88efa959a601cb51198ebc3ffb177b,PodSandboxId:673ee2b0165fe945027593ddac46096514d9435db5c2e71e33f375ab9c331c83,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1760890933730025259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name:
etcd-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e534005f60d285b98e417f41c8364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f1d3ad959b93e8e8e49f4681bc322d2825c6cf1da2034b159d6d0f755652c0b,PodSandboxId:17fb857c17d88d657d9641ef8849439ae4333cd8aa1d59ba674dd1237491d4b7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760890933719943890,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-305823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23b7d19ad7631633238b5b69db373fd6,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac9d7797-f69c-46cd-839e-dd83efe2f583 name=/runtime.v1.RuntimeService/ListContainers
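If the addons-305823 profile is still running, the CRI calls captured in the debug log above (Version, ImageFsInfo, ListContainers) can be replayed by hand as a cross-check. This is only a sketch; it assumes crictl is available inside the minikube guest and that CRI-O is using its default socket:

  # mirrors the /runtime.v1.RuntimeService/Version request
  out/minikube-linux-amd64 -p addons-305823 ssh "sudo crictl version"
  # mirrors the /runtime.v1.ImageService/ImageFsInfo request
  out/minikube-linux-amd64 -p addons-305823 ssh "sudo crictl imagefsinfo"
  # mirrors the unfiltered /runtime.v1.RuntimeService/ListContainers request
  out/minikube-linux-amd64 -p addons-305823 ssh "sudo crictl ps -a"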
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f2e229f24b51       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   d654e331a6729       nginx
	014c90fefc571       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   20df72d18db36       busybox
	33128726cf77e       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   e8ec1661a1345       ingress-nginx-controller-675c5ddd98-tzg8f
	a22b29bd753e5       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     1                   526f695b4a9f3       ingress-nginx-admission-patch-sk2gw
	9b9650bd2d814       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   cdd6837e921dd       ingress-nginx-admission-create-tjjvd
	3f4a9ccbf88ab       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   7b09adcd2266f       gadget-75v6k
	ce835576c4095       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   1517874da0700       local-path-provisioner-648f6765c9-l5h9z
	0531eda3e3136       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   40647999f63bf       kube-ingress-dns-minikube
	ba5a5475c3467       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   a01c99121dbf3       amd-gpu-device-plugin-c8fj2
	8b555b09a507a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   16dc451d6d308       storage-provisioner
	543eec372e554       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   6cd3ece8174ca       coredns-66bc5c9577-dvhs7
	44ccf9e8cc0a6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   80159d0290621       kube-proxy-46rm2
	6cbc411666fd7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   f1e9993e1a673       kube-apiserver-addons-305823
	94de7f42912b6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   10374f2840285       kube-controller-manager-addons-305823
	695fd25ae4504       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   673ee2b0165fe       etcd-addons-305823
	0f1d3ad959b93       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   17fb857c17d88       kube-scheduler-addons-305823
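Each row in the table above can be tied back to its pod and full CRI metadata by inspecting the truncated container ID, for example the nginx container in the first row. A sketch only, assuming the container still exists and that crictl accepts the ID prefix shown in the table:

  # full CRI view of the nginx container, including the annotations dumped earlier in this log
  out/minikube-linux-amd64 -p addons-305823 ssh "sudo crictl inspect 3f2e229f24b51"
  # recent logs for the same container
  out/minikube-linux-amd64 -p addons-305823 ssh "sudo crictl logs 3f2e229f24b51 2>&1 | tail -n 20"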
	
	
	==> coredns [543eec372e554caed8f222be1d4f17ff83a006c4834198e02d61139252fd69b3] <==
	[INFO] 10.244.0.8:49109 - 12629 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00016546s
	[INFO] 10.244.0.8:49109 - 33124 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000152597s
	[INFO] 10.244.0.8:49109 - 62703 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00137096s
	[INFO] 10.244.0.8:49109 - 46747 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000117399s
	[INFO] 10.244.0.8:49109 - 40724 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000203861s
	[INFO] 10.244.0.8:49109 - 15598 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00009125s
	[INFO] 10.244.0.8:49109 - 16557 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00017907s
	[INFO] 10.244.0.8:50485 - 46412 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000576589s
	[INFO] 10.244.0.8:50485 - 46705 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000721644s
	[INFO] 10.244.0.8:35905 - 10641 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000186564s
	[INFO] 10.244.0.8:35905 - 10412 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000758722s
	[INFO] 10.244.0.8:38672 - 46930 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085251s
	[INFO] 10.244.0.8:38672 - 47203 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000155563s
	[INFO] 10.244.0.8:41191 - 22841 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077426s
	[INFO] 10.244.0.8:41191 - 23290 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000163883s
	[INFO] 10.244.0.23:40636 - 7679 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002688264s
	[INFO] 10.244.0.23:35239 - 49063 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001676097s
	[INFO] 10.244.0.23:43494 - 45391 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000147732s
	[INFO] 10.244.0.23:48702 - 49760 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000286825s
	[INFO] 10.244.0.23:41487 - 30673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163043s
	[INFO] 10.244.0.23:41559 - 46677 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143426s
	[INFO] 10.244.0.23:42224 - 54483 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.00225993s
	[INFO] 10.244.0.23:35008 - 44234 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003277272s
	[INFO] 10.244.0.26:46345 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000249671s
	[INFO] 10.244.0.26:52684 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152858s
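The NXDOMAIN/NOERROR pairs above are ordinary ndots search-path expansion: each name is first tried against the pod's search domains before the bare service FQDN resolves. If the busybox pod from this run is still present, the behaviour can be reproduced directly (a sketch, not part of the test itself):

  # search domains that drive the expansions logged above
  kubectl --context addons-305823 exec busybox -- cat /etc/resolv.conf
  # resolves via CoreDNS, matching the final NOERROR entries
  kubectl --context addons-305823 exec busybox -- nslookup registry.kube-system.svc.cluster.local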
	
	
	==> describe nodes <==
	Name:               addons-305823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-305823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=addons-305823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T16_22_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-305823
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 16:22:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-305823
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 16:27:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 16:26:24 +0000   Sun, 19 Oct 2025 16:22:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 16:26:24 +0000   Sun, 19 Oct 2025 16:22:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 16:26:24 +0000   Sun, 19 Oct 2025 16:22:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 16:26:24 +0000   Sun, 19 Oct 2025 16:22:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.11
	  Hostname:    addons-305823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 f263ad2ec691493190469032f0718877
	  System UUID:                f263ad2e-c691-4931-9046-9032f0718877
	  Boot ID:                    73f3f6cb-d997-4d3a-a317-1a35889aec7a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     hello-world-app-5d498dc89-tqklq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gadget                      gadget-75v6k                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-tzg8f    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m14s
	  kube-system                 amd-gpu-device-plugin-c8fj2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 coredns-66bc5c9577-dvhs7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m22s
	  kube-system                 etcd-addons-305823                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m27s
	  kube-system                 kube-apiserver-addons-305823                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-addons-305823        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-proxy-46rm2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-addons-305823                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  local-path-storage          local-path-provisioner-648f6765c9-l5h9z      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m20s                  kube-proxy       
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node addons-305823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node addons-305823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node addons-305823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m27s                  kubelet          Node addons-305823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s                  kubelet          Node addons-305823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s                  kubelet          Node addons-305823 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m27s                  kubelet          Node addons-305823 status is now: NodeReady
	  Normal  RegisteredNode           5m23s                  node-controller  Node addons-305823 event: Registered Node addons-305823 in Controller
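The node summary above is plain kubectl output, so it can be refreshed while the cluster is still up; the hello-world-app pod was only 1s old at capture time, which is worth re-checking when investigating the ingress failure. Assuming the kubeconfig context from this run still exists:

  # regenerate the node description shown above
  kubectl --context addons-305823 describe node addons-305823
  # current state of the workloads listed under "Non-terminated Pods"
  kubectl --context addons-305823 get pods -A -o wide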
	
	
	==> dmesg <==
	[  +0.147698] kauditd_printk_skb: 413 callbacks suppressed
	[  +1.694235] kauditd_printk_skb: 234 callbacks suppressed
	[  +9.171607] kauditd_printk_skb: 20 callbacks suppressed
	[Oct19 16:23] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.216928] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.194307] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.063873] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.975653] kauditd_printk_skb: 153 callbacks suppressed
	[  +5.939531] kauditd_printk_skb: 93 callbacks suppressed
	[  +4.077183] kauditd_printk_skb: 67 callbacks suppressed
	[  +1.068551] kauditd_printk_skb: 11 callbacks suppressed
	[Oct19 16:24] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000028] kauditd_printk_skb: 47 callbacks suppressed
	[Oct19 16:25] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.560715] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.680103] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.110505] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.218630] kauditd_printk_skb: 124 callbacks suppressed
	[  +5.854624] kauditd_printk_skb: 80 callbacks suppressed
	[  +2.363662] kauditd_printk_skb: 93 callbacks suppressed
	[  +1.707097] kauditd_printk_skb: 96 callbacks suppressed
	[  +1.587895] kauditd_printk_skb: 114 callbacks suppressed
	[Oct19 16:26] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.000076] kauditd_printk_skb: 112 callbacks suppressed
	[Oct19 16:27] kauditd_printk_skb: 37 callbacks suppressed
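The dmesg excerpt only shows kauditd throttling notices; if more kernel context is needed (for example OOM or network-driver messages around the curl timeout), the full ring buffer can be pulled from the guest, assuming the VM is still running:

  out/minikube-linux-amd64 -p addons-305823 ssh "sudo dmesg | tail -n 100"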
	
	
	==> etcd [695fd25ae450485ec9a49ac6cf21c3eeae88efa959a601cb51198ebc3ffb177b] <==
	{"level":"warn","ts":"2025-10-19T16:25:28.104928Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.317149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-10-19T16:25:28.104998Z","caller":"traceutil/trace.go:172","msg":"trace[1611937006] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:1482; }","duration":"178.402776ms","start":"2025-10-19T16:25:27.926586Z","end":"2025-10-19T16:25:28.104989Z","steps":["trace[1611937006] 'agreement among raft nodes before linearized reading'  (duration: 176.271145ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:28.105455Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.970475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.11\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-10-19T16:25:28.105563Z","caller":"traceutil/trace.go:172","msg":"trace[1120673357] range","detail":"{range_begin:/registry/masterleases/192.168.39.11; range_end:; response_count:1; response_revision:1483; }","duration":"174.083687ms","start":"2025-10-19T16:25:27.931470Z","end":"2025-10-19T16:25:28.105554Z","steps":["trace[1120673357] 'agreement among raft nodes before linearized reading'  (duration: 173.913119ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:25:28.105762Z","caller":"traceutil/trace.go:172","msg":"trace[2107646964] transaction","detail":"{read_only:false; response_revision:1483; number_of_response:1; }","duration":"192.643643ms","start":"2025-10-19T16:25:27.913111Z","end":"2025-10-19T16:25:28.105754Z","steps":["trace[2107646964] 'process raft request'  (duration: 189.747506ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:28.105852Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.215796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:25:28.105867Z","caller":"traceutil/trace.go:172","msg":"trace[69019337] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1483; }","duration":"103.237598ms","start":"2025-10-19T16:25:28.002626Z","end":"2025-10-19T16:25:28.105863Z","steps":["trace[69019337] 'agreement among raft nodes before linearized reading'  (duration: 103.201704ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:28.105963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.093836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:1434"}
	{"level":"info","ts":"2025-10-19T16:25:28.105975Z","caller":"traceutil/trace.go:172","msg":"trace[1610101886] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1483; }","duration":"149.108664ms","start":"2025-10-19T16:25:27.956863Z","end":"2025-10-19T16:25:28.105971Z","steps":["trace[1610101886] 'agreement among raft nodes before linearized reading'  (duration: 149.050202ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:25:35.884593Z","caller":"traceutil/trace.go:172","msg":"trace[499660902] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"124.429203ms","start":"2025-10-19T16:25:35.760153Z","end":"2025-10-19T16:25:35.884582Z","steps":["trace[499660902] 'process raft request'  (duration: 124.119754ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:46.741390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.30258ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:25:46.741494Z","caller":"traceutil/trace.go:172","msg":"trace[944872329] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1640; }","duration":"163.415775ms","start":"2025-10-19T16:25:46.578068Z","end":"2025-10-19T16:25:46.741484Z","steps":["trace[944872329] 'range keys from in-memory index tree'  (duration: 163.264966ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:46.741725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.060035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:25:46.741749Z","caller":"traceutil/trace.go:172","msg":"trace[1053214913] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1640; }","duration":"160.087588ms","start":"2025-10-19T16:25:46.581653Z","end":"2025-10-19T16:25:46.741740Z","steps":["trace[1053214913] 'range keys from in-memory index tree'  (duration: 159.999267ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:46.741897Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"404.420509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/yakd-dashboard/\" range_end:\"/registry/events/yakd-dashboard0\" limit:10000 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:25:46.741931Z","caller":"traceutil/trace.go:172","msg":"trace[1571145450] range","detail":"{range_begin:/registry/events/yakd-dashboard/; range_end:/registry/events/yakd-dashboard0; response_count:0; response_revision:1640; }","duration":"404.461485ms","start":"2025-10-19T16:25:46.337462Z","end":"2025-10-19T16:25:46.741923Z","steps":["trace[1571145450] 'range keys from in-memory index tree'  (duration: 404.385602ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-19T16:25:46.741951Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-19T16:25:46.337455Z","time spent":"404.489936ms","remote":"127.0.0.1:42194","response type":"/etcdserverpb.KV/Range","request count":0,"request size":71,"response count":0,"response size":28,"request content":"key:\"/registry/events/yakd-dashboard/\" range_end:\"/registry/events/yakd-dashboard0\" limit:10000 "}
	{"level":"warn","ts":"2025-10-19T16:25:46.741970Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"347.686587ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-19T16:25:46.741997Z","caller":"traceutil/trace.go:172","msg":"trace[1212670207] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1640; }","duration":"347.715189ms","start":"2025-10-19T16:25:46.394277Z","end":"2025-10-19T16:25:46.741992Z","steps":["trace[1212670207] 'range keys from in-memory index tree'  (duration: 347.659907ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:25:59.460293Z","caller":"traceutil/trace.go:172","msg":"trace[1243719561] linearizableReadLoop","detail":"{readStateIndex:1822; appliedIndex:1822; }","duration":"120.196678ms","start":"2025-10-19T16:25:59.340075Z","end":"2025-10-19T16:25:59.460272Z","steps":["trace[1243719561] 'read index received'  (duration: 120.191685ms)","trace[1243719561] 'applied index is now lower than readState.Index'  (duration: 4.315µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-19T16:25:59.460468Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.387549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-10-19T16:25:59.460489Z","caller":"traceutil/trace.go:172","msg":"trace[1173505065] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1751; }","duration":"120.426722ms","start":"2025-10-19T16:25:59.340056Z","end":"2025-10-19T16:25:59.460483Z","steps":["trace[1173505065] 'agreement among raft nodes before linearized reading'  (duration: 120.315569ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:25:59.460801Z","caller":"traceutil/trace.go:172","msg":"trace[688351330] transaction","detail":"{read_only:false; response_revision:1752; number_of_response:1; }","duration":"178.741982ms","start":"2025-10-19T16:25:59.282049Z","end":"2025-10-19T16:25:59.460791Z","steps":["trace[688351330] 'process raft request'  (duration: 178.256456ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:26:00.370938Z","caller":"traceutil/trace.go:172","msg":"trace[1618253961] transaction","detail":"{read_only:false; response_revision:1755; number_of_response:1; }","duration":"232.597257ms","start":"2025-10-19T16:26:00.138323Z","end":"2025-10-19T16:26:00.370921Z","steps":["trace[1618253961] 'process raft request'  (duration: 232.517713ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-19T16:26:15.744893Z","caller":"traceutil/trace.go:172","msg":"trace[645005542] transaction","detail":"{read_only:false; response_revision:1932; number_of_response:1; }","duration":"167.248943ms","start":"2025-10-19T16:26:15.577631Z","end":"2025-10-19T16:26:15.744880Z","steps":["trace[645005542] 'process raft request'  (duration: 166.143633ms)"],"step_count":1}
	
	
	==> kernel <==
	 16:27:46 up 5 min,  0 users,  load average: 0.46, 0.85, 0.48
	Linux addons-305823 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [6cbc411666fd74b787500650e3b2f680323aab6a50abb596526e733665d2fb95] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1019 16:23:21.188959       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.79.77:443: connect: connection refused" logger="UnhandledError"
	E1019 16:23:21.190964       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.79.77:443: connect: connection refused" logger="UnhandledError"
	E1019 16:23:21.195752       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.79.77:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.79.77:443: connect: connection refused" logger="UnhandledError"
	I1019 16:23:21.259474       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1019 16:25:09.004001       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:34484: use of closed network connection
	E1019 16:25:09.195622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.11:8443->192.168.39.1:34496: use of closed network connection
	I1019 16:25:17.808938       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1019 16:25:17.993449       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.227.7"}
	I1019 16:25:44.936640       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1019 16:25:54.481787       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.91.194"}
	I1019 16:26:10.301854       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 16:26:10.303594       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 16:26:10.338672       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 16:26:10.338758       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 16:26:10.362572       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 16:26:10.362602       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1019 16:26:10.433939       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1019 16:26:10.433982       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1019 16:26:11.333643       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1019 16:26:11.434757       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1019 16:26:11.599169       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1019 16:26:22.208391       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1019 16:27:45.319243       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.60.168"}
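The final apiserver entries record ClusterIP allocations for the default/nginx and default/hello-world-app services exercised by the ingress tests. If the cluster is still reachable, both objects and the earlier metrics.k8s.io aggregation failures can be checked with (sketch):

  kubectl --context addons-305823 get svc -n default nginx hello-world-app
  # an error here is expected if the metrics APIService was removed when its addon was disabled
  kubectl --context addons-305823 get apiservice v1beta1.metrics.k8s.io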
	
	
	==> kube-controller-manager [94de7f42912b6d061b07c7d5211da38005ada28f1ea960a779ffcfa91fe2f26c] <==
	E1019 16:26:19.089947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:19.993897       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:19.994927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1019 16:26:23.560208       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1019 16:26:23.560260       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1019 16:26:23.595234       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1019 16:26:23.595275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1019 16:26:25.816172       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:25.817355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:26.007407       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:26.008419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:29.336134       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:29.337231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:44.617607       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:44.619348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:45.418430       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:45.419416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:26:47.260434       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:26:47.261368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:27:18.180836       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:27:18.182089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:27:31.359667       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:27:31.360575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1019 16:27:32.295089       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1019 16:27:32.296171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [44ccf9e8cc0a674165f370256a12f89e329d5c770a57c8d3c43dbb96113353f7] <==
	I1019 16:22:25.957012       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 16:22:26.101397       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 16:22:26.104590       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.11"]
	E1019 16:22:26.104689       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 16:22:26.331645       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1019 16:22:26.331729       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 16:22:26.331749       1 server_linux.go:132] "Using iptables Proxier"
	I1019 16:22:26.357184       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 16:22:26.357452       1 server.go:527] "Version info" version="v1.34.1"
	I1019 16:22:26.357479       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 16:22:26.365960       1 config.go:200] "Starting service config controller"
	I1019 16:22:26.365986       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 16:22:26.366004       1 config.go:106] "Starting endpoint slice config controller"
	I1019 16:22:26.366007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 16:22:26.366028       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 16:22:26.366046       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 16:22:26.366646       1 config.go:309] "Starting node config controller"
	I1019 16:22:26.366668       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 16:22:26.366674       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1019 16:22:26.469691       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 16:22:26.469754       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 16:22:26.475112       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0f1d3ad959b93e8e8e49f4681bc322d2825c6cf1da2034b159d6d0f755652c0b] <==
	E1019 16:22:16.336020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1019 16:22:16.336158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1019 16:22:16.336163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:22:16.336240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1019 16:22:16.336269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:22:16.336576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1019 16:22:16.336732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:22:16.336729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:22:16.336817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:22:16.336869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:22:17.151617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1019 16:22:17.153449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1019 16:22:17.269214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1019 16:22:17.287760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1019 16:22:17.323322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1019 16:22:17.347662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1019 16:22:17.369492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1019 16:22:17.402460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1019 16:22:17.415554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1019 16:22:17.435296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1019 16:22:17.451474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1019 16:22:17.452719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1019 16:22:17.537649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1019 16:22:17.554977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1019 16:22:19.528661       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 16:26:19 addons-305823 kubelet[1500]: E1019 16:26:19.166003    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891179165547215  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:19 addons-305823 kubelet[1500]: I1019 16:26:19.658870    1500 scope.go:117] "RemoveContainer" containerID="6b4a0000e0b0ce51697b76c81b302f510508e4c68772bcd841be9c3f5495c8f5"
	Oct 19 16:26:19 addons-305823 kubelet[1500]: I1019 16:26:19.777433    1500 scope.go:117] "RemoveContainer" containerID="e2ccd90c98d964fc5146ddae1a6f39ffc64b26edbcfbd329e99cee6fb3d84a1b"
	Oct 19 16:26:19 addons-305823 kubelet[1500]: I1019 16:26:19.892119    1500 scope.go:117] "RemoveContainer" containerID="c2bec065be87f6b6b58d9445182d2abb393882e02f303ee1fa60918c0237a7c6"
	Oct 19 16:26:26 addons-305823 kubelet[1500]: I1019 16:26:26.898250    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c8fj2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:26:29 addons-305823 kubelet[1500]: E1019 16:26:29.170350    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891189169875647  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:29 addons-305823 kubelet[1500]: E1019 16:26:29.170389    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891189169875647  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:39 addons-305823 kubelet[1500]: E1019 16:26:39.172810    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891199172253549  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:39 addons-305823 kubelet[1500]: E1019 16:26:39.172834    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891199172253549  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:49 addons-305823 kubelet[1500]: E1019 16:26:49.175718    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891209175389331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:49 addons-305823 kubelet[1500]: E1019 16:26:49.176134    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891209175389331  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:59 addons-305823 kubelet[1500]: E1019 16:26:59.178910    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891219178362790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:26:59 addons-305823 kubelet[1500]: E1019 16:26:59.178957    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891219178362790  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:09 addons-305823 kubelet[1500]: E1019 16:27:09.182067    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891229181663821  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:09 addons-305823 kubelet[1500]: E1019 16:27:09.182105    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891229181663821  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:19 addons-305823 kubelet[1500]: E1019 16:27:19.184419    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891239184071948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:19 addons-305823 kubelet[1500]: E1019 16:27:19.184458    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891239184071948  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:19 addons-305823 kubelet[1500]: I1019 16:27:19.898101    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:27:29 addons-305823 kubelet[1500]: E1019 16:27:29.186599    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891249186215075  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:29 addons-305823 kubelet[1500]: E1019 16:27:29.186636    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891249186215075  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:36 addons-305823 kubelet[1500]: I1019 16:27:36.898609    1500 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-c8fj2" secret="" err="secret \"gcp-auth\" not found"
	Oct 19 16:27:39 addons-305823 kubelet[1500]: E1019 16:27:39.189240    1500 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760891259188891587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:39 addons-305823 kubelet[1500]: E1019 16:27:39.189266    1500 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760891259188891587  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 19 16:27:45 addons-305823 kubelet[1500]: E1019 16:27:45.264053    1500 status_manager.go:1018] "Failed to get status for pod" err="pods \"hello-world-app-5d498dc89-tqklq\" is forbidden: User \"system:node:addons-305823\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-305823' and this object" podUID="cb4b3f74-04fa-404d-b670-18c306ce68af" pod="default/hello-world-app-5d498dc89-tqklq"
	Oct 19 16:27:45 addons-305823 kubelet[1500]: I1019 16:27:45.383917    1500 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4mbk\" (UniqueName: \"kubernetes.io/projected/cb4b3f74-04fa-404d-b670-18c306ce68af-kube-api-access-n4mbk\") pod \"hello-world-app-5d498dc89-tqklq\" (UID: \"cb4b3f74-04fa-404d-b670-18c306ce68af\") " pod="default/hello-world-app-5d498dc89-tqklq"
	
	
	==> storage-provisioner [8b555b09a507a78139208843213f8fcf702895b8133b4ef4c3b5916c43bf9eff] <==
	W1019 16:27:22.070475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:24.073802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:24.079047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:26.082113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:26.086556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:28.089711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:28.097058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:30.100020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:30.104727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:32.107360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:32.112577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:34.116589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:34.123571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:36.127164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:36.132453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:38.136279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:38.141325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:40.145066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:40.153240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:42.156767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:42.166204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:44.170689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:44.178025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:46.182267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1019 16:27:46.188099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-305823 -n addons-305823
helpers_test.go:269: (dbg) Run:  kubectl --context addons-305823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-tqklq ingress-nginx-admission-create-tjjvd ingress-nginx-admission-patch-sk2gw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-305823 describe pod hello-world-app-5d498dc89-tqklq ingress-nginx-admission-create-tjjvd ingress-nginx-admission-patch-sk2gw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-305823 describe pod hello-world-app-5d498dc89-tqklq ingress-nginx-admission-create-tjjvd ingress-nginx-admission-patch-sk2gw: exit status 1 (67.474236ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-tqklq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-305823/
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Image:        docker.io/kicbase/echo-server:1.0
	    Port:         8080/TCP
	    Host Port:    0/TCP
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n4mbk (ro)
	Conditions:
	  Type           Status
	  PodScheduled   True 
	Volumes:
	  kube-api-access-n4mbk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-tqklq to addons-305823
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tjjvd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sk2gw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-305823 describe pod hello-world-app-5d498dc89-tqklq ingress-nginx-admission-create-tjjvd ingress-nginx-admission-patch-sk2gw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable ingress-dns --alsologtostderr -v=1: (1.317327989s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable ingress --alsologtostderr -v=1: (7.772054281s)
--- FAIL: TestAddons/parallel/Ingress (159.33s)
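
As a quick way to replay the non-running-pod check from the post-mortem above by hand, the sketch below shells out to the same kubectl commands. It is only an illustration, not the test helper itself: it assumes kubectl is on PATH and that the addons-305823 context still exists, and it simply prints the NotFound errors that already-deleted admission pods produce.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "addons-305823" // kubeconfig context used throughout this run

		// List pods in all namespaces whose phase is not Running, exactly as
		// the post-mortem helper above does.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			fmt.Println("listing non-running pods failed:", err)
			return
		}

		// Describe each pod; ones that were already cleaned up (the admission
		// create/patch jobs) come back as NotFound, which is expected here.
		for _, pod := range strings.Fields(string(out)) {
			b, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", pod).CombinedOutput()
			if err != nil {
				fmt.Printf("describe %s: %v\n", pod, err)
			}
			fmt.Print(string(b))
		}
	}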

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image rm kicbase/echo-server:functional-244936 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 image rm kicbase/echo-server:functional-244936 --alsologtostderr: (2.666095094s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-244936" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (2.95s)
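
The failing assertion above boils down to: after `image rm`, the tag must no longer appear in `image ls`. Below is a minimal stand-alone sketch of that check; it assumes a minikube binary on PATH and the existing functional-244936 profile, and is not the functional_test.go helper itself.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "functional-244936"
		image := "kicbase/echo-server:" + profile // tag the test expects to disappear

		// Remove the image, mirroring the `image rm` step in the test.
		rm := exec.Command("minikube", "-p", profile, "image", "rm", image, "--alsologtostderr")
		rm.Stdout, rm.Stderr = os.Stdout, os.Stderr
		if err := rm.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "image rm failed:", err)
			os.Exit(1)
		}

		// List what the runtime still has and check the tag really went away;
		// this is the check that fails in the report above.
		out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "image ls failed:", err)
			os.Exit(1)
		}
		if strings.Contains(string(out), image) {
			fmt.Printf("expected %q to be removed but it is still listed\n", image)
			os.Exit(1)
		}
		fmt.Println("image removed as expected")
	}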

                                                
                                    
x
+
TestPreload (137.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-360119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-360119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0: (1m8.622739886s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-360119 image pull gcr.io/k8s-minikube/busybox
E1019 17:13:34.096751  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-360119 image pull gcr.io/k8s-minikube/busybox: (3.415266794s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-360119
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-360119: (7.321838303s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-360119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-360119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.9601344s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-360119 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-19 17:14:39.888428687 +0000 UTC m=+3217.203123810
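
The sequence this test drives can be replayed by hand once the cluster exists: pull an extra image into the non-preloaded cluster, stop it, restart it (this time picking up the downloaded preload), and confirm the pulled image survived. The sketch below only illustrates that flow; it assumes a minikube binary on PATH and the already-created test-preload-360119 profile, whereas the real run uses the out/minikube-linux-amd64 build shown above.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// run invokes minikube against the test-preload profile and aborts on error.
	func run(args ...string) string {
		cmd := exec.Command("minikube", append([]string{"-p", "test-preload-360119"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "minikube %v failed: %v\n%s", args, err, out)
			os.Exit(1)
		}
		return string(out)
	}

	func main() {
		run("image", "pull", "gcr.io/k8s-minikube/busybox") // add an image that is not part of the preload
		run("stop")
		run("start", "--memory=3072", "--wait=true") // second start downloads and uses the preload
		if !strings.Contains(run("image", "list"), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox missing after restart - the failure reported above")
			os.Exit(1)
		}
		fmt.Println("busybox survived the restart")
	}
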
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-360119 -n test-preload-360119
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-360119 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-470285 ssh -n multinode-470285-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ ssh     │ multinode-470285 ssh -n multinode-470285 sudo cat /home/docker/cp-test_multinode-470285-m03_multinode-470285.txt                                                                    │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ cp      │ multinode-470285 cp multinode-470285-m03:/home/docker/cp-test.txt multinode-470285-m02:/home/docker/cp-test_multinode-470285-m03_multinode-470285-m02.txt                           │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ ssh     │ multinode-470285 ssh -n multinode-470285-m03 sudo cat /home/docker/cp-test.txt                                                                                                      │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ ssh     │ multinode-470285 ssh -n multinode-470285-m02 sudo cat /home/docker/cp-test_multinode-470285-m03_multinode-470285-m02.txt                                                            │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ node    │ multinode-470285 node stop m03                                                                                                                                                      │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:01 UTC │
	│ node    │ multinode-470285 node start m03 -v=5 --alsologtostderr                                                                                                                              │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:01 UTC │ 19 Oct 25 17:02 UTC │
	│ node    │ list -p multinode-470285                                                                                                                                                            │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:02 UTC │                     │
	│ stop    │ -p multinode-470285                                                                                                                                                                 │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:02 UTC │ 19 Oct 25 17:05 UTC │
	│ start   │ -p multinode-470285 --wait=true -v=5 --alsologtostderr                                                                                                                              │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:05 UTC │ 19 Oct 25 17:07 UTC │
	│ node    │ list -p multinode-470285                                                                                                                                                            │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:07 UTC │                     │
	│ node    │ multinode-470285 node delete m03                                                                                                                                                    │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:07 UTC │ 19 Oct 25 17:07 UTC │
	│ stop    │ multinode-470285 stop                                                                                                                                                               │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:07 UTC │ 19 Oct 25 17:10 UTC │
	│ start   │ -p multinode-470285 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                          │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:10 UTC │ 19 Oct 25 17:11 UTC │
	│ node    │ list -p multinode-470285                                                                                                                                                            │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ start   │ -p multinode-470285-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-470285-m02 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │                     │
	│ start   │ -p multinode-470285-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                                                         │ multinode-470285-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 17:11 UTC │ 19 Oct 25 17:12 UTC │
	│ node    │ add -p multinode-470285                                                                                                                                                             │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │                     │
	│ delete  │ -p multinode-470285-m03                                                                                                                                                             │ multinode-470285-m03 │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ delete  │ -p multinode-470285                                                                                                                                                                 │ multinode-470285     │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:12 UTC │
	│ start   │ -p test-preload-360119 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.32.0 │ test-preload-360119  │ jenkins │ v1.37.0 │ 19 Oct 25 17:12 UTC │ 19 Oct 25 17:13 UTC │
	│ image   │ test-preload-360119 image pull gcr.io/k8s-minikube/busybox                                                                                                                          │ test-preload-360119  │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ stop    │ -p test-preload-360119                                                                                                                                                              │ test-preload-360119  │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:13 UTC │
	│ start   │ -p test-preload-360119 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                         │ test-preload-360119  │ jenkins │ v1.37.0 │ 19 Oct 25 17:13 UTC │ 19 Oct 25 17:14 UTC │
	│ image   │ test-preload-360119 image list                                                                                                                                                      │ test-preload-360119  │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:13:44.763154  308549 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:13:44.763388  308549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:13:44.763397  308549 out.go:374] Setting ErrFile to fd 2...
	I1019 17:13:44.763401  308549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:13:44.763591  308549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:13:44.764026  308549 out.go:368] Setting JSON to false
	I1019 17:13:44.764866  308549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10567,"bootTime":1760883458,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:13:44.764961  308549 start.go:143] virtualization: kvm guest
	I1019 17:13:44.766570  308549 out.go:179] * [test-preload-360119] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:13:44.767616  308549 notify.go:221] Checking for updates...
	I1019 17:13:44.767618  308549 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:13:44.768576  308549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:13:44.769442  308549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:13:44.770458  308549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 17:13:44.771459  308549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:13:44.772424  308549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:13:44.773765  308549 config.go:182] Loaded profile config "test-preload-360119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:13:44.774212  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:13:44.774262  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:13:44.789284  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38535
	I1019 17:13:44.789719  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:13:44.790232  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:13:44.790279  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:13:44.790636  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:13:44.790831  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:13:44.792214  308549 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1019 17:13:44.793106  308549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:13:44.793478  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:13:44.793529  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:13:44.806861  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46781
	I1019 17:13:44.807301  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:13:44.807676  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:13:44.807693  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:13:44.808033  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:13:44.808250  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:13:44.840162  308549 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 17:13:44.841024  308549 start.go:309] selected driver: kvm2
	I1019 17:13:44.841040  308549 start.go:930] validating driver "kvm2" against &{Name:test-preload-360119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.32.0 ClusterName:test-preload-360119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:13:44.841134  308549 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:13:44.841831  308549 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:44.841897  308549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:13:44.855282  308549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:13:44.855303  308549 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:13:44.868287  308549 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:13:44.868673  308549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:13:44.868704  308549 cni.go:84] Creating CNI manager for ""
	I1019 17:13:44.868747  308549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:13:44.868795  308549 start.go:353] cluster config:
	{Name:test-preload-360119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-360119 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:13:44.868883  308549 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:13:44.870688  308549 out.go:179] * Starting "test-preload-360119" primary control-plane node in "test-preload-360119" cluster
	I1019 17:13:44.871521  308549 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:13:44.980229  308549 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:13:44.980261  308549 cache.go:59] Caching tarball of preloaded images
	I1019 17:13:44.980417  308549 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:13:44.982022  308549 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1019 17:13:44.982885  308549 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 17:13:45.094477  308549 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1019 17:13:45.094534  308549 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1019 17:13:55.787436  308549 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1019 17:13:55.787566  308549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/config.json ...
	I1019 17:13:55.787791  308549 start.go:360] acquireMachinesLock for test-preload-360119: {Name:mk3b19946e20646ec6cf08c56ebb92a1f48fa1bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 17:13:55.787857  308549 start.go:364] duration metric: took 43.354µs to acquireMachinesLock for "test-preload-360119"
	I1019 17:13:55.787893  308549 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:13:55.787902  308549 fix.go:54] fixHost starting: 
	I1019 17:13:55.788231  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:13:55.788346  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:13:55.801866  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41121
	I1019 17:13:55.802391  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:13:55.802946  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:13:55.802972  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:13:55.803355  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:13:55.803600  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:13:55.803742  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetState
	I1019 17:13:55.805426  308549 fix.go:112] recreateIfNeeded on test-preload-360119: state=Stopped err=<nil>
	I1019 17:13:55.805448  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	W1019 17:13:55.805627  308549 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:13:55.807266  308549 out.go:252] * Restarting existing kvm2 VM for "test-preload-360119" ...
	I1019 17:13:55.807292  308549 main.go:143] libmachine: (test-preload-360119) Calling .Start
	I1019 17:13:55.807444  308549 main.go:143] libmachine: (test-preload-360119) starting domain...
	I1019 17:13:55.807468  308549 main.go:143] libmachine: (test-preload-360119) ensuring networks are active...
	I1019 17:13:55.808166  308549 main.go:143] libmachine: (test-preload-360119) Ensuring network default is active
	I1019 17:13:55.808557  308549 main.go:143] libmachine: (test-preload-360119) Ensuring network mk-test-preload-360119 is active
	I1019 17:13:55.809165  308549 main.go:143] libmachine: (test-preload-360119) getting domain XML...
	I1019 17:13:55.810346  308549 main.go:143] libmachine: (test-preload-360119) DBG | starting domain XML:
	I1019 17:13:55.810365  308549 main.go:143] libmachine: (test-preload-360119) DBG | <domain type='kvm'>
	I1019 17:13:55.810376  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <name>test-preload-360119</name>
	I1019 17:13:55.810385  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <uuid>cdc7f770-26aa-4b6e-af53-c4fd14dcca90</uuid>
	I1019 17:13:55.810400  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <memory unit='KiB'>3145728</memory>
	I1019 17:13:55.810413  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1019 17:13:55.810425  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 17:13:55.810435  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <os>
	I1019 17:13:55.810447  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 17:13:55.810474  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <boot dev='cdrom'/>
	I1019 17:13:55.810508  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <boot dev='hd'/>
	I1019 17:13:55.810526  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <bootmenu enable='no'/>
	I1019 17:13:55.810534  308549 main.go:143] libmachine: (test-preload-360119) DBG |   </os>
	I1019 17:13:55.810540  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <features>
	I1019 17:13:55.810545  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <acpi/>
	I1019 17:13:55.810551  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <apic/>
	I1019 17:13:55.810556  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <pae/>
	I1019 17:13:55.810560  308549 main.go:143] libmachine: (test-preload-360119) DBG |   </features>
	I1019 17:13:55.810567  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 17:13:55.810572  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <clock offset='utc'/>
	I1019 17:13:55.810577  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 17:13:55.810582  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <on_reboot>restart</on_reboot>
	I1019 17:13:55.810587  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <on_crash>destroy</on_crash>
	I1019 17:13:55.810591  308549 main.go:143] libmachine: (test-preload-360119) DBG |   <devices>
	I1019 17:13:55.810597  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 17:13:55.810602  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <disk type='file' device='cdrom'>
	I1019 17:13:55.810628  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <driver name='qemu' type='raw'/>
	I1019 17:13:55.810657  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/boot2docker.iso'/>
	I1019 17:13:55.810680  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 17:13:55.810694  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <readonly/>
	I1019 17:13:55.810704  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 17:13:55.810725  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </disk>
	I1019 17:13:55.810737  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <disk type='file' device='disk'>
	I1019 17:13:55.810752  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 17:13:55.810772  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/test-preload-360119.rawdisk'/>
	I1019 17:13:55.810785  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <target dev='hda' bus='virtio'/>
	I1019 17:13:55.810797  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 17:13:55.810806  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </disk>
	I1019 17:13:55.810813  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 17:13:55.810830  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 17:13:55.810845  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </controller>
	I1019 17:13:55.810859  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 17:13:55.810870  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 17:13:55.810884  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 17:13:55.810892  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </controller>
	I1019 17:13:55.810899  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <interface type='network'>
	I1019 17:13:55.810916  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <mac address='52:54:00:de:63:9b'/>
	I1019 17:13:55.810929  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <source network='mk-test-preload-360119'/>
	I1019 17:13:55.810944  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <model type='virtio'/>
	I1019 17:13:55.810958  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 17:13:55.810970  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </interface>
	I1019 17:13:55.811019  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <interface type='network'>
	I1019 17:13:55.811036  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <mac address='52:54:00:cd:d8:55'/>
	I1019 17:13:55.811046  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <source network='default'/>
	I1019 17:13:55.811054  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <model type='virtio'/>
	I1019 17:13:55.811075  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 17:13:55.811094  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </interface>
	I1019 17:13:55.811107  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <serial type='pty'>
	I1019 17:13:55.811118  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <target type='isa-serial' port='0'>
	I1019 17:13:55.811130  308549 main.go:143] libmachine: (test-preload-360119) DBG |         <model name='isa-serial'/>
	I1019 17:13:55.811140  308549 main.go:143] libmachine: (test-preload-360119) DBG |       </target>
	I1019 17:13:55.811146  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </serial>
	I1019 17:13:55.811154  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <console type='pty'>
	I1019 17:13:55.811167  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <target type='serial' port='0'/>
	I1019 17:13:55.811178  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </console>
	I1019 17:13:55.811190  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <input type='mouse' bus='ps2'/>
	I1019 17:13:55.811201  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 17:13:55.811210  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <audio id='1' type='none'/>
	I1019 17:13:55.811236  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <memballoon model='virtio'>
	I1019 17:13:55.811251  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 17:13:55.811262  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </memballoon>
	I1019 17:13:55.811274  308549 main.go:143] libmachine: (test-preload-360119) DBG |     <rng model='virtio'>
	I1019 17:13:55.811285  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <backend model='random'>/dev/random</backend>
	I1019 17:13:55.811304  308549 main.go:143] libmachine: (test-preload-360119) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 17:13:55.811314  308549 main.go:143] libmachine: (test-preload-360119) DBG |     </rng>
	I1019 17:13:55.811335  308549 main.go:143] libmachine: (test-preload-360119) DBG |   </devices>
	I1019 17:13:55.811354  308549 main.go:143] libmachine: (test-preload-360119) DBG | </domain>
	I1019 17:13:55.811374  308549 main.go:143] libmachine: (test-preload-360119) DBG | 
	I1019 17:13:57.355691  308549 main.go:143] libmachine: (test-preload-360119) waiting for domain to start...
	I1019 17:13:57.357102  308549 main.go:143] libmachine: (test-preload-360119) domain is now running
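Restarting the stopped VM amounts to asking libvirt to boot the already-defined domain whose XML is dumped above, after making sure the two networks it references (default and mk-test-preload-360119) are active. A minimal sketch of that sequence, assuming the libvirt Go bindings (libvirt.org/go/libvirt); this is illustrative only, not the actual kvm2 driver code:

    package main

    import (
        "fmt"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Ensure the networks referenced by the domain's <interface> elements are up.
        for _, name := range []string{"default", "mk-test-preload-360119"} {
            nw, err := conn.LookupNetworkByName(name)
            if err != nil {
                panic(err)
            }
            if active, _ := nw.IsActive(); !active {
                if err := nw.Create(); err != nil { // start the inactive network
                    panic(err)
                }
            }
            nw.Free()
        }

        dom, err := conn.LookupDomainByName("test-preload-360119")
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        xml, _ := dom.GetXMLDesc(0) // the XML logged above
        fmt.Println(len(xml), "bytes of domain XML")

        if err := dom.Create(); err != nil { // boot the stopped domain
            panic(err)
        }
        fmt.Println("domain is now running")
    }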
	I1019 17:13:57.357129  308549 main.go:143] libmachine: (test-preload-360119) waiting for IP...
	I1019 17:13:57.357970  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:13:57.358552  308549 main.go:143] libmachine: (test-preload-360119) found domain IP: 192.168.39.174
	I1019 17:13:57.358575  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has current primary IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:13:57.358583  308549 main.go:143] libmachine: (test-preload-360119) reserving static IP address...
	I1019 17:13:57.359105  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "test-preload-360119", mac: "52:54:00:de:63:9b", ip: "192.168.39.174"} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:12:40 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:13:57.359133  308549 main.go:143] libmachine: (test-preload-360119) reserved static IP address 192.168.39.174 for domain test-preload-360119
	I1019 17:13:57.359151  308549 main.go:143] libmachine: (test-preload-360119) DBG | skip adding static IP to network mk-test-preload-360119 - found existing host DHCP lease matching {name: "test-preload-360119", mac: "52:54:00:de:63:9b", ip: "192.168.39.174"}
	I1019 17:13:57.359185  308549 main.go:143] libmachine: (test-preload-360119) DBG | Getting to WaitForSSH function...
	I1019 17:13:57.359199  308549 main.go:143] libmachine: (test-preload-360119) waiting for SSH...
	I1019 17:13:57.361586  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:13:57.361919  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:12:40 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:13:57.361946  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:13:57.362095  308549 main.go:143] libmachine: (test-preload-360119) DBG | Using SSH client type: external
	I1019 17:13:57.362118  308549 main.go:143] libmachine: (test-preload-360119) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa (-rw-------)
	I1019 17:13:57.362161  308549 main.go:143] libmachine: (test-preload-360119) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 17:13:57.362177  308549 main.go:143] libmachine: (test-preload-360119) DBG | About to run SSH command:
	I1019 17:13:57.362197  308549 main.go:143] libmachine: (test-preload-360119) DBG | exit 0
	I1019 17:14:07.611613  308549 main.go:143] libmachine: (test-preload-360119) DBG | SSH cmd err, output: exit status 255: 
	I1019 17:14:07.611645  308549 main.go:143] libmachine: (test-preload-360119) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1019 17:14:07.611658  308549 main.go:143] libmachine: (test-preload-360119) DBG | command : exit 0
	I1019 17:14:07.611671  308549 main.go:143] libmachine: (test-preload-360119) DBG | err     : exit status 255
	I1019 17:14:07.611684  308549 main.go:143] libmachine: (test-preload-360119) DBG | output  : 
	I1019 17:14:10.613757  308549 main.go:143] libmachine: (test-preload-360119) DBG | Getting to WaitForSSH function...
	I1019 17:14:10.616568  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.617022  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:10.617056  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.617251  308549 main.go:143] libmachine: (test-preload-360119) DBG | Using SSH client type: external
	I1019 17:14:10.617279  308549 main.go:143] libmachine: (test-preload-360119) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa (-rw-------)
	I1019 17:14:10.617327  308549 main.go:143] libmachine: (test-preload-360119) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 17:14:10.617341  308549 main.go:143] libmachine: (test-preload-360119) DBG | About to run SSH command:
	I1019 17:14:10.617397  308549 main.go:143] libmachine: (test-preload-360119) DBG | exit 0
	I1019 17:14:10.743484  308549 main.go:143] libmachine: (test-preload-360119) DBG | SSH cmd err, output: <nil>: 
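"Waiting for SSH" is a poll: the external ssh binary is run with the options shown above (no host-key checking, the machine's id_rsa identity) and the trivial command "exit 0", and the attempt is retried a few seconds after each failure; the first attempt above returns status 255 because sshd inside the guest is not up yet. A hypothetical Go version of that loop using os/exec:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForSSH polls "ssh ... exit 0" until it succeeds or the deadline passes.
    func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
        args := []string{
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0",
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("ssh", args...).Run(); err == nil {
                return nil // sshd answered and ran the command
            }
            time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
        }
        return fmt.Errorf("ssh to %s@%s not ready within %s", user, ip, timeout)
    }

    func main() {
        err := waitForSSH("docker", "192.168.39.174",
            "/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa",
            2*time.Minute)
        fmt.Println("waitForSSH:", err)
    }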
	I1019 17:14:10.743882  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetConfigRaw
	I1019 17:14:10.744580  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetIP
	I1019 17:14:10.747384  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.747756  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:10.747777  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.748073  308549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/config.json ...
	I1019 17:14:10.748302  308549 machine.go:94] provisionDockerMachine start ...
	I1019 17:14:10.748325  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:10.748588  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:10.750835  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.751218  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:10.751256  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.751422  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:10.751607  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.751764  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.751903  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:10.752103  308549 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:10.752399  308549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1019 17:14:10.752416  308549 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:14:10.855406  308549 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1019 17:14:10.855438  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetMachineName
	I1019 17:14:10.855745  308549 buildroot.go:166] provisioning hostname "test-preload-360119"
	I1019 17:14:10.855780  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetMachineName
	I1019 17:14:10.856016  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:10.859171  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.859524  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:10.859552  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.859692  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:10.859889  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.860067  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.860235  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:10.860402  308549 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:10.860615  308549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1019 17:14:10.860628  308549 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-360119 && echo "test-preload-360119" | sudo tee /etc/hostname
	I1019 17:14:10.976497  308549 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-360119
	
	I1019 17:14:10.976537  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:10.979541  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.979904  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:10.979943  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:10.980169  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:10.980353  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.980518  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:10.980617  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:10.980772  308549 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:10.981052  308549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1019 17:14:10.981079  308549 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-360119' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-360119/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-360119' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:14:11.090488  308549 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:14:11.090521  308549 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-274250/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-274250/.minikube}
	I1019 17:14:11.090547  308549 buildroot.go:174] setting up certificates
	I1019 17:14:11.090561  308549 provision.go:84] configureAuth start
	I1019 17:14:11.090576  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetMachineName
	I1019 17:14:11.090897  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetIP
	I1019 17:14:11.093735  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.094107  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.094143  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.094367  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.096838  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.097349  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.097385  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.097577  308549 provision.go:143] copyHostCerts
	I1019 17:14:11.097649  308549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem, removing ...
	I1019 17:14:11.097667  308549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem
	I1019 17:14:11.097731  308549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem (1082 bytes)
	I1019 17:14:11.097831  308549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem, removing ...
	I1019 17:14:11.097840  308549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem
	I1019 17:14:11.097868  308549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem (1123 bytes)
	I1019 17:14:11.098001  308549 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem, removing ...
	I1019 17:14:11.098015  308549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem
	I1019 17:14:11.098048  308549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem (1675 bytes)
	I1019 17:14:11.098137  308549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem org=jenkins.test-preload-360119 san=[127.0.0.1 192.168.39.174 localhost minikube test-preload-360119]
	I1019 17:14:11.237214  308549 provision.go:177] copyRemoteCerts
	I1019 17:14:11.237291  308549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:14:11.237322  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.240046  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.240417  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.240444  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.240754  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.240952  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.241183  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.241362  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:11.323863  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:14:11.350827  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:14:11.377743  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1019 17:14:11.403508  308549 provision.go:87] duration metric: took 312.931687ms to configureAuth
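configureAuth regenerates the machine's server certificate from the local CA with the SANs listed in the provision.go:117 line above (127.0.0.1, 192.168.39.174, localhost, minikube, test-preload-360119) and then copies server-key.pem, ca.pem and server.pem into /etc/docker on the guest. The following is a self-contained sketch of issuing such a SAN certificate with Go's crypto/x509, using a throwaway CA instead of minikube's stored ca.pem/ca-key.pem; error handling is elided:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key/cert standing in for ca.pem / ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-360119"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "test-preload-360119"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        // Write server.pem; the key and ca.pem would be written the same way.
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }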
	I1019 17:14:11.403541  308549 buildroot.go:189] setting minikube options for container-runtime
	I1019 17:14:11.403705  308549 config.go:182] Loaded profile config "test-preload-360119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:14:11.403779  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.406562  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.406962  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.407002  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.407242  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.407447  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.407609  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.407783  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.407948  308549 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:11.408221  308549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1019 17:14:11.408238  308549 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:14:11.645797  308549 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:14:11.645826  308549 machine.go:97] duration metric: took 897.50817ms to provisionDockerMachine
	I1019 17:14:11.645842  308549 start.go:293] postStartSetup for "test-preload-360119" (driver="kvm2")
	I1019 17:14:11.645855  308549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:14:11.645879  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:11.646264  308549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:14:11.646299  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.649743  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.650144  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.650172  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.650363  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.650572  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.650737  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.650903  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:11.730961  308549 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:14:11.735419  308549 info.go:137] Remote host: Buildroot 2025.02
	I1019 17:14:11.735441  308549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/addons for local assets ...
	I1019 17:14:11.735512  308549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/files for local assets ...
	I1019 17:14:11.735582  308549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem -> 2782802.pem in /etc/ssl/certs
	I1019 17:14:11.735668  308549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:14:11.746465  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem --> /etc/ssl/certs/2782802.pem (1708 bytes)
	I1019 17:14:11.773474  308549 start.go:296] duration metric: took 127.613972ms for postStartSetup
	I1019 17:14:11.773511  308549 fix.go:56] duration metric: took 15.985609289s for fixHost
	I1019 17:14:11.773533  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.776103  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.776551  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.776580  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.776795  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.777066  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.777298  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.777456  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.777624  308549 main.go:143] libmachine: Using SSH client type: native
	I1019 17:14:11.777900  308549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1019 17:14:11.777915  308549 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1019 17:14:11.877596  308549 main.go:143] libmachine: SSH cmd err, output: <nil>: 1760894051.835553982
	
	I1019 17:14:11.877625  308549 fix.go:216] guest clock: 1760894051.835553982
	I1019 17:14:11.877634  308549 fix.go:229] Guest: 2025-10-19 17:14:11.835553982 +0000 UTC Remote: 2025-10-19 17:14:11.773514563 +0000 UTC m=+27.048763449 (delta=62.039419ms)
	I1019 17:14:11.877660  308549 fix.go:200] guest clock delta is within tolerance: 62.039419ms
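The guest clock check runs "date +%s.%N" over SSH and compares the result with the host's wall clock; here the skew is 62.039419ms, inside the tolerance, so the guest clock is left alone. A small sketch of that comparison using the two timestamps from the log (the 2s threshold is illustrative, not minikube's actual tolerance):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts "date +%s.%N" output (e.g. "1760894051.835553982")
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        // Values taken from the log lines above.
        guest, err := parseGuestClock("1760894051.835553982")
        if err != nil {
            panic(err)
        }
        host := time.Date(2025, time.October, 19, 17, 14, 11, 773514563, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, delta < 2*time.Second)
    }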
	I1019 17:14:11.877668  308549 start.go:83] releasing machines lock for "test-preload-360119", held for 16.089795955s
	I1019 17:14:11.877697  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:11.877924  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetIP
	I1019 17:14:11.880901  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.881301  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.881333  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.881528  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:11.882082  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:11.882278  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:11.882348  308549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:14:11.882422  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.882499  308549 ssh_runner.go:195] Run: cat /version.json
	I1019 17:14:11.882528  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:11.885735  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.886016  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.886102  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.886126  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.886374  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.886541  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:11.886558  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:11.886573  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.886804  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:11.886859  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.886971  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:11.887018  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:11.887131  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:11.887283  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:11.983033  308549 ssh_runner.go:195] Run: systemctl --version
	I1019 17:14:11.989063  308549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:14:12.130941  308549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:14:12.138485  308549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:14:12.138547  308549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:14:12.157126  308549 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
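Since minikube manages its own CNI configuration, any pre-existing bridge or podman config under /etc/cni/net.d is renamed with a .mk_disabled suffix by the find/-exec mv command above; here that disabled 87-podman-bridge.conflist. The same rename, sketched with the Go standard library (root privileges assumed):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        cniDir := "/etc/cni/net.d"
        for _, pattern := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(cniDir, pattern))
            if err != nil {
                panic(err)
            }
            for _, path := range matches {
                if filepath.Ext(path) == ".mk_disabled" {
                    continue // already disabled on a previous run
                }
                if err := os.Rename(path, path+".mk_disabled"); err != nil {
                    fmt.Println("skip:", err)
                    continue
                }
                fmt.Println("disabled", path)
            }
        }
    }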
	I1019 17:14:12.157152  308549 start.go:496] detecting cgroup driver to use...
	I1019 17:14:12.157217  308549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:14:12.174799  308549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:14:12.190754  308549 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:14:12.190832  308549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:14:12.206855  308549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:14:12.222505  308549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:14:12.362712  308549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:14:12.571745  308549 docker.go:234] disabling docker service ...
	I1019 17:14:12.571817  308549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:14:12.587676  308549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:14:12.602086  308549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:14:12.758313  308549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:14:12.900387  308549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:14:12.916190  308549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:14:12.936606  308549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1019 17:14:12.936671  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:12.947904  308549 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:14:12.947974  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:12.959090  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:12.970278  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:12.981421  308549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:14:12.993074  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:13.004363  308549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:13.023359  308549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:14:13.035165  308549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:14:13.045100  308549 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1019 17:14:13.045159  308549 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1019 17:14:13.063879  308549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
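The netfilter step first probes the kernel with "sysctl net.bridge.bridge-nf-call-iptables"; on this image br_netfilter is not loaded yet, so the probe fails, the module is loaded with modprobe, and IPv4 forwarding is enabled by writing 1 to /proc/sys/net/ipv4/ip_forward. A rough Go equivalent of those checks (root required, illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

        // Equivalent of the failed sysctl probe: the file only exists once
        // br_netfilter has been loaded.
        if _, err := os.Stat(bridgeNF); err != nil {
            fmt.Println("br_netfilter not loaded, loading it:", err)
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v (%s)\n", err, out)
            }
        }

        // echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Println("could not enable ip_forward:", err)
        }
    }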
	I1019 17:14:13.075105  308549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:13.212789  308549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:14:13.318331  308549 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:14:13.318425  308549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:14:13.323588  308549 start.go:564] Will wait 60s for crictl version
	I1019 17:14:13.323653  308549 ssh_runner.go:195] Run: which crictl
	I1019 17:14:13.327465  308549 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 17:14:13.366843  308549 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
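After CRI-O is restarted, the code waits up to 60s for the crio.sock path to appear and then up to 60s for "crictl version" to answer; the runtime reports itself as cri-o 1.29.1 above. A hypothetical version of those two waits:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls fn every second until it succeeds or the timeout expires.
    func waitFor(timeout time.Duration, fn func() error) error {
        deadline := time.Now().Add(timeout)
        var err error
        for time.Now().Before(deadline) {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return err
    }

    func main() {
        const sock = "/var/run/crio/crio.sock"

        // Will wait 60s for socket path /var/run/crio/crio.sock
        if err := waitFor(60*time.Second, func() error {
            _, err := os.Stat(sock)
            return err
        }); err != nil {
            panic(err)
        }

        // Will wait 60s for crictl version
        if err := waitFor(60*time.Second, func() error {
            out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Print(string(out))
            }
            return err
        }); err != nil {
            panic(err)
        }
    }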
	I1019 17:14:13.366924  308549 ssh_runner.go:195] Run: crio --version
	I1019 17:14:13.398994  308549 ssh_runner.go:195] Run: crio --version
	I1019 17:14:13.428800  308549 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1019 17:14:13.429768  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetIP
	I1019 17:14:13.432807  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:13.433192  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:13.433216  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:13.433485  308549 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1019 17:14:13.437878  308549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
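The host.minikube.internal entry is added idempotently: grep first checks whether the line already exists, and if not the bash pipeline above rewrites /etc/hosts with any old entry filtered out and a tab-separated "192.168.39.1 host.minikube.internal" line appended (the same trick is used later for control-plane.minikube.internal). A plain-Go rendering of that update, run against a scratch copy rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so it contains exactly one "ip\thost" line.
    func ensureHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous entry for this hostname (the grep -v above).
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        tmp := "hosts.copy" // scratch file standing in for /etc/hosts
        os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := ensureHostsEntry(tmp, "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
        out, _ := os.ReadFile(tmp)
        fmt.Print(string(out))
    }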
	I1019 17:14:13.452155  308549 kubeadm.go:884] updating cluster {Name:test-preload-360119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-360119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:14:13.452310  308549 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1019 17:14:13.452405  308549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:14:13.490603  308549 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1019 17:14:13.490692  308549 ssh_runner.go:195] Run: which lz4
	I1019 17:14:13.495037  308549 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1019 17:14:13.499587  308549 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1019 17:14:13.499622  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1019 17:14:14.911382  308549 crio.go:462] duration metric: took 1.416405704s to copy over tarball
	I1019 17:14:14.911465  308549 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1019 17:14:16.597864  308549 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.686372949s)
	I1019 17:14:16.597894  308549 crio.go:469] duration metric: took 1.68647638s to extract the tarball
	I1019 17:14:16.597902  308549 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1019 17:14:16.638791  308549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:14:16.686081  308549 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:14:16.686112  308549 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:14:16.686120  308549 kubeadm.go:935] updating node { 192.168.39.174 8443 v1.32.0 crio true true} ...
	I1019 17:14:16.686224  308549 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-360119 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-360119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:14:16.686335  308549 ssh_runner.go:195] Run: crio config
	I1019 17:14:16.733879  308549 cni.go:84] Creating CNI manager for ""
	I1019 17:14:16.733904  308549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:14:16.733927  308549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:14:16.733949  308549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-360119 NodeName:test-preload-360119 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:14:16.734092  308549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-360119"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:14:16.734152  308549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1019 17:14:16.747003  308549 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:14:16.747082  308549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:14:16.757319  308549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1019 17:14:16.777380  308549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:14:16.795928  308549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1019 17:14:16.814518  308549 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1019 17:14:16.818191  308549 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1019 17:14:16.831272  308549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:16.967070  308549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:14:16.990156  308549 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119 for IP: 192.168.39.174
	I1019 17:14:16.990190  308549 certs.go:195] generating shared ca certs ...
	I1019 17:14:16.990210  308549 certs.go:227] acquiring lock for ca certs: {Name:mk7795547103f90561160e6fc6ada1c3a2cc6617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:16.990380  308549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key
	I1019 17:14:16.990426  308549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key
	I1019 17:14:16.990437  308549 certs.go:257] generating profile certs ...
	I1019 17:14:16.990518  308549 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.key
	I1019 17:14:16.990585  308549 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/apiserver.key.ebafd9b5
	I1019 17:14:16.990667  308549 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/proxy-client.key
	I1019 17:14:16.990817  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280.pem (1338 bytes)
	W1019 17:14:16.990850  308549 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280_empty.pem, impossibly tiny 0 bytes
	I1019 17:14:16.990861  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:14:16.990883  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:14:16.990907  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:14:16.990931  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem (1675 bytes)
	I1019 17:14:16.990977  308549 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem (1708 bytes)
	I1019 17:14:16.991561  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:14:17.024353  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:14:17.063454  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:14:17.091441  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:14:17.118536  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1019 17:14:17.145327  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1019 17:14:17.172180  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:14:17.206455  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1019 17:14:17.233927  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280.pem --> /usr/share/ca-certificates/278280.pem (1338 bytes)
	I1019 17:14:17.261304  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem --> /usr/share/ca-certificates/2782802.pem (1708 bytes)
	I1019 17:14:17.289164  308549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:14:17.316503  308549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:14:17.335287  308549 ssh_runner.go:195] Run: openssl version
	I1019 17:14:17.341257  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:14:17.352743  308549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:17.357263  308549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:17.357306  308549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:14:17.364003  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:14:17.375512  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278280.pem && ln -fs /usr/share/ca-certificates/278280.pem /etc/ssl/certs/278280.pem"
	I1019 17:14:17.387025  308549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278280.pem
	I1019 17:14:17.391644  308549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:31 /usr/share/ca-certificates/278280.pem
	I1019 17:14:17.391693  308549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278280.pem
	I1019 17:14:17.398156  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/278280.pem /etc/ssl/certs/51391683.0"
	I1019 17:14:17.409656  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2782802.pem && ln -fs /usr/share/ca-certificates/2782802.pem /etc/ssl/certs/2782802.pem"
	I1019 17:14:17.421492  308549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2782802.pem
	I1019 17:14:17.426070  308549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:31 /usr/share/ca-certificates/2782802.pem
	I1019 17:14:17.426129  308549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2782802.pem
	I1019 17:14:17.432607  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2782802.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:14:17.444072  308549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:14:17.448772  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:14:17.455550  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:14:17.462204  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:14:17.468938  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:14:17.475545  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:14:17.481863  308549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1019 17:14:17.488303  308549 kubeadm.go:401] StartCluster: {Name:test-preload-360119 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-360119 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:14:17.488399  308549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:14:17.488436  308549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:14:17.526058  308549 cri.go:89] found id: ""
	I1019 17:14:17.526136  308549 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1019 17:14:17.537418  308549 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1019 17:14:17.537440  308549 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1019 17:14:17.537499  308549 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1019 17:14:17.548103  308549 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:14:17.548676  308549 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-360119" does not appear in /home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:14:17.548836  308549 kubeconfig.go:62] /home/jenkins/minikube-integration/21683-274250/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-360119" cluster setting kubeconfig missing "test-preload-360119" context setting]
	I1019 17:14:17.549246  308549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/kubeconfig: {Name:mk22311d445eddc7a50c63a1389fab4cf9c803b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:17.550005  308549 kapi.go:59] client config for test-preload-360119: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.key", CAFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:14:17.550567  308549 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1019 17:14:17.550592  308549 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1019 17:14:17.550599  308549 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1019 17:14:17.550607  308549 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1019 17:14:17.550616  308549 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1019 17:14:17.551167  308549 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1019 17:14:17.561064  308549 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.174
	I1019 17:14:17.561098  308549 kubeadm.go:1161] stopping kube-system containers ...
	I1019 17:14:17.561120  308549 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1019 17:14:17.561165  308549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:14:17.598598  308549 cri.go:89] found id: ""
	I1019 17:14:17.598664  308549 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1019 17:14:17.615566  308549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1019 17:14:17.626380  308549 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1019 17:14:17.626410  308549 kubeadm.go:158] found existing configuration files:
	
	I1019 17:14:17.626446  308549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1019 17:14:17.636234  308549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1019 17:14:17.636289  308549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1019 17:14:17.646936  308549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1019 17:14:17.656874  308549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1019 17:14:17.656933  308549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1019 17:14:17.667277  308549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1019 17:14:17.676969  308549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1019 17:14:17.677039  308549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1019 17:14:17.687350  308549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1019 17:14:17.697266  308549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1019 17:14:17.697319  308549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1019 17:14:17.707497  308549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1019 17:14:17.717901  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:17.768271  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:18.650428  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:18.901218  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:18.968779  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:19.040761  308549 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:14:19.040865  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:19.540916  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:20.041965  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:20.541946  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:21.041302  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:21.541012  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:21.571703  308549 api_server.go:72] duration metric: took 2.530959828s to wait for apiserver process to appear ...
	I1019 17:14:21.571732  308549 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:14:21.571756  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:24.409708  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:14:24.409743  308549 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:14:24.409759  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:24.450284  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:14:24.450311  308549 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:14:24.572646  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:24.578310  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:14:24.578338  308549 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:14:25.071991  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:25.080664  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:14:25.080692  308549 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:14:25.572234  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:25.585430  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:14:25.585462  308549 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:14:26.072686  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:26.077432  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I1019 17:14:26.083443  308549 api_server.go:141] control plane version: v1.32.0
	I1019 17:14:26.083465  308549 api_server.go:131] duration metric: took 4.511726377s to wait for apiserver health ...
	I1019 17:14:26.083475  308549 cni.go:84] Creating CNI manager for ""
	I1019 17:14:26.083481  308549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:14:26.085022  308549 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 17:14:26.085969  308549 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 17:14:26.102871  308549 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1019 17:14:26.135443  308549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:14:26.139574  308549 system_pods.go:59] 7 kube-system pods found
	I1019 17:14:26.139612  308549 system_pods.go:61] "coredns-668d6bf9bc-lpfq5" [87fb550d-0003-48ca-ab75-5ee6cad71963] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:26.139622  308549 system_pods.go:61] "etcd-test-preload-360119" [ff36bde1-a2a3-4700-bbb5-5fcbec277e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:14:26.139634  308549 system_pods.go:61] "kube-apiserver-test-preload-360119" [7004e4d8-eb94-4646-a030-9e906a9ea408] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:14:26.139642  308549 system_pods.go:61] "kube-controller-manager-test-preload-360119" [ce3b9394-934d-4c9a-97fc-e82825fe266c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:14:26.139650  308549 system_pods.go:61] "kube-proxy-qtqvh" [fb36f695-65ea-4d5e-807a-cda5aca26c04] Running
	I1019 17:14:26.139662  308549 system_pods.go:61] "kube-scheduler-test-preload-360119" [966faf53-ffe8-4e80-8c8e-48ef7b370dd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:14:26.139668  308549 system_pods.go:61] "storage-provisioner" [5c585391-07bd-4c38-9a28-35410c69fd35] Running
	I1019 17:14:26.139679  308549 system_pods.go:74] duration metric: took 4.196841ms to wait for pod list to return data ...
	I1019 17:14:26.139692  308549 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:14:26.143434  308549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:14:26.143458  308549 node_conditions.go:123] node cpu capacity is 2
	I1019 17:14:26.143469  308549 node_conditions.go:105] duration metric: took 3.772388ms to run NodePressure ...
	I1019 17:14:26.143517  308549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:14:26.410902  308549 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1019 17:14:26.418490  308549 kubeadm.go:744] kubelet initialised
	I1019 17:14:26.418518  308549 kubeadm.go:745] duration metric: took 7.586113ms waiting for restarted kubelet to initialise ...
	I1019 17:14:26.418538  308549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:14:26.433356  308549 ops.go:34] apiserver oom_adj: -16
	I1019 17:14:26.433384  308549 kubeadm.go:602] duration metric: took 8.895935972s to restartPrimaryControlPlane
	I1019 17:14:26.433395  308549 kubeadm.go:403] duration metric: took 8.9451052s to StartCluster
	I1019 17:14:26.433418  308549 settings.go:142] acquiring lock: {Name:mkf8e8333d0302d1bf1fad4a2ff30b0524cb52b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:26.433504  308549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:14:26.434095  308549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/kubeconfig: {Name:mk22311d445eddc7a50c63a1389fab4cf9c803b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:14:26.434357  308549 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:14:26.434451  308549 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:14:26.434551  308549 config.go:182] Loaded profile config "test-preload-360119": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1019 17:14:26.434560  308549 addons.go:70] Setting default-storageclass=true in profile "test-preload-360119"
	I1019 17:14:26.434551  308549 addons.go:70] Setting storage-provisioner=true in profile "test-preload-360119"
	I1019 17:14:26.434596  308549 addons.go:239] Setting addon storage-provisioner=true in "test-preload-360119"
	I1019 17:14:26.434578  308549 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-360119"
	W1019 17:14:26.434612  308549 addons.go:248] addon storage-provisioner should already be in state true
	I1019 17:14:26.434647  308549 host.go:66] Checking if "test-preload-360119" exists ...
	I1019 17:14:26.434956  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:14:26.435009  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:14:26.435084  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:14:26.435142  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:14:26.435586  308549 out.go:179] * Verifying Kubernetes components...
	I1019 17:14:26.436552  308549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:14:26.448777  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43043
	I1019 17:14:26.448778  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43563
	I1019 17:14:26.449298  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:14:26.449357  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:14:26.449797  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:14:26.449822  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:14:26.449800  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:14:26.449845  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:14:26.450249  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:14:26.450275  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:14:26.450482  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetState
	I1019 17:14:26.450789  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:14:26.450825  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:14:26.452965  308549 kapi.go:59] client config for test-preload-360119: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.key", CAFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:14:26.453346  308549 addons.go:239] Setting addon default-storageclass=true in "test-preload-360119"
	W1019 17:14:26.453368  308549 addons.go:248] addon default-storageclass should already be in state true
	I1019 17:14:26.453399  308549 host.go:66] Checking if "test-preload-360119" exists ...
	I1019 17:14:26.453758  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:14:26.453805  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:14:26.466943  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46583
	I1019 17:14:26.467391  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:14:26.467852  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:14:26.467878  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:14:26.468309  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:14:26.468562  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43343
	I1019 17:14:26.468825  308549 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:14:26.468876  308549 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:14:26.468954  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:14:26.469371  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:14:26.469393  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:14:26.469758  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:14:26.469958  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetState
	I1019 17:14:26.471959  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:26.476088  308549 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1019 17:14:26.477085  308549 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:14:26.477103  308549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1019 17:14:26.477127  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:26.480444  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:26.480923  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:26.480952  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:26.481173  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:26.481394  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:26.481608  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:26.481799  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:26.483766  308549 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:36093
	I1019 17:14:26.484182  308549 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:14:26.484605  308549 main.go:143] libmachine: Using API Version  1
	I1019 17:14:26.484640  308549 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:14:26.484970  308549 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:14:26.485200  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetState
	I1019 17:14:26.486736  308549 main.go:143] libmachine: (test-preload-360119) Calling .DriverName
	I1019 17:14:26.486962  308549 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1019 17:14:26.486976  308549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1019 17:14:26.487004  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHHostname
	I1019 17:14:26.490031  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:26.490533  308549 main.go:143] libmachine: (test-preload-360119) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:63:9b", ip: ""} in network mk-test-preload-360119: {Iface:virbr1 ExpiryTime:2025-10-19 18:14:07 +0000 UTC Type:0 Mac:52:54:00:de:63:9b Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:test-preload-360119 Clientid:01:52:54:00:de:63:9b}
	I1019 17:14:26.490568  308549 main.go:143] libmachine: (test-preload-360119) DBG | domain test-preload-360119 has defined IP address 192.168.39.174 and MAC address 52:54:00:de:63:9b in network mk-test-preload-360119
	I1019 17:14:26.490759  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHPort
	I1019 17:14:26.490951  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHKeyPath
	I1019 17:14:26.491133  308549 main.go:143] libmachine: (test-preload-360119) Calling .GetSSHUsername
	I1019 17:14:26.491294  308549 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/test-preload-360119/id_rsa Username:docker}
	I1019 17:14:26.681936  308549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:14:26.702871  308549 node_ready.go:35] waiting up to 6m0s for node "test-preload-360119" to be "Ready" ...
	I1019 17:14:26.705545  308549 node_ready.go:49] node "test-preload-360119" is "Ready"
	I1019 17:14:26.705575  308549 node_ready.go:38] duration metric: took 2.623417ms for node "test-preload-360119" to be "Ready" ...
	I1019 17:14:26.705589  308549 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:14:26.705642  308549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:14:26.726894  308549 api_server.go:72] duration metric: took 292.48006ms to wait for apiserver process to appear ...
	I1019 17:14:26.726915  308549 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:14:26.726930  308549 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I1019 17:14:26.732796  308549 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I1019 17:14:26.733669  308549 api_server.go:141] control plane version: v1.32.0
	I1019 17:14:26.733694  308549 api_server.go:131] duration metric: took 6.772656ms to wait for apiserver health ...
	I1019 17:14:26.733709  308549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:14:26.737919  308549 system_pods.go:59] 7 kube-system pods found
	I1019 17:14:26.737943  308549 system_pods.go:61] "coredns-668d6bf9bc-lpfq5" [87fb550d-0003-48ca-ab75-5ee6cad71963] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:26.737949  308549 system_pods.go:61] "etcd-test-preload-360119" [ff36bde1-a2a3-4700-bbb5-5fcbec277e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:14:26.737960  308549 system_pods.go:61] "kube-apiserver-test-preload-360119" [7004e4d8-eb94-4646-a030-9e906a9ea408] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:14:26.737971  308549 system_pods.go:61] "kube-controller-manager-test-preload-360119" [ce3b9394-934d-4c9a-97fc-e82825fe266c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:14:26.737991  308549 system_pods.go:61] "kube-proxy-qtqvh" [fb36f695-65ea-4d5e-807a-cda5aca26c04] Running
	I1019 17:14:26.738000  308549 system_pods.go:61] "kube-scheduler-test-preload-360119" [966faf53-ffe8-4e80-8c8e-48ef7b370dd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:14:26.738005  308549 system_pods.go:61] "storage-provisioner" [5c585391-07bd-4c38-9a28-35410c69fd35] Running
	I1019 17:14:26.738013  308549 system_pods.go:74] duration metric: took 4.294896ms to wait for pod list to return data ...
	I1019 17:14:26.738021  308549 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:14:26.739837  308549 default_sa.go:45] found service account: "default"
	I1019 17:14:26.739866  308549 default_sa.go:55] duration metric: took 1.837713ms for default service account to be created ...
	I1019 17:14:26.739879  308549 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:14:26.742751  308549 system_pods.go:86] 7 kube-system pods found
	I1019 17:14:26.742778  308549 system_pods.go:89] "coredns-668d6bf9bc-lpfq5" [87fb550d-0003-48ca-ab75-5ee6cad71963] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:14:26.742787  308549 system_pods.go:89] "etcd-test-preload-360119" [ff36bde1-a2a3-4700-bbb5-5fcbec277e2b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:14:26.742796  308549 system_pods.go:89] "kube-apiserver-test-preload-360119" [7004e4d8-eb94-4646-a030-9e906a9ea408] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:14:26.742803  308549 system_pods.go:89] "kube-controller-manager-test-preload-360119" [ce3b9394-934d-4c9a-97fc-e82825fe266c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:14:26.742809  308549 system_pods.go:89] "kube-proxy-qtqvh" [fb36f695-65ea-4d5e-807a-cda5aca26c04] Running
	I1019 17:14:26.742818  308549 system_pods.go:89] "kube-scheduler-test-preload-360119" [966faf53-ffe8-4e80-8c8e-48ef7b370dd1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:14:26.742822  308549 system_pods.go:89] "storage-provisioner" [5c585391-07bd-4c38-9a28-35410c69fd35] Running
	I1019 17:14:26.742832  308549 system_pods.go:126] duration metric: took 2.946175ms to wait for k8s-apps to be running ...
	I1019 17:14:26.742838  308549 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:14:26.742885  308549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:14:26.759616  308549 system_svc.go:56] duration metric: took 16.773432ms WaitForService to wait for kubelet
	I1019 17:14:26.759633  308549 kubeadm.go:587] duration metric: took 325.224051ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:14:26.759652  308549 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:14:26.761967  308549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:14:26.761996  308549 node_conditions.go:123] node cpu capacity is 2
	I1019 17:14:26.762008  308549 node_conditions.go:105] duration metric: took 2.350925ms to run NodePressure ...
	I1019 17:14:26.762025  308549 start.go:242] waiting for startup goroutines ...
	I1019 17:14:26.782014  308549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1019 17:14:26.813563  308549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1019 17:14:27.398612  308549 main.go:143] libmachine: Making call to close driver server
	I1019 17:14:27.398642  308549 main.go:143] libmachine: (test-preload-360119) Calling .Close
	I1019 17:14:27.398672  308549 main.go:143] libmachine: Making call to close driver server
	I1019 17:14:27.398697  308549 main.go:143] libmachine: (test-preload-360119) Calling .Close
	I1019 17:14:27.398975  308549 main.go:143] libmachine: Successfully made call to close driver server
	I1019 17:14:27.398987  308549 main.go:143] libmachine: Successfully made call to close driver server
	I1019 17:14:27.399000  308549 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 17:14:27.399007  308549 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 17:14:27.399008  308549 main.go:143] libmachine: Making call to close driver server
	I1019 17:14:27.399006  308549 main.go:143] libmachine: (test-preload-360119) DBG | Closing plugin on server side
	I1019 17:14:27.399017  308549 main.go:143] libmachine: (test-preload-360119) Calling .Close
	I1019 17:14:27.398975  308549 main.go:143] libmachine: (test-preload-360119) DBG | Closing plugin on server side
	I1019 17:14:27.399015  308549 main.go:143] libmachine: Making call to close driver server
	I1019 17:14:27.399076  308549 main.go:143] libmachine: (test-preload-360119) Calling .Close
	I1019 17:14:27.399236  308549 main.go:143] libmachine: Successfully made call to close driver server
	I1019 17:14:27.399252  308549 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 17:14:27.399297  308549 main.go:143] libmachine: Successfully made call to close driver server
	I1019 17:14:27.399309  308549 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 17:14:27.399327  308549 main.go:143] libmachine: (test-preload-360119) DBG | Closing plugin on server side
	I1019 17:14:27.404796  308549 main.go:143] libmachine: Making call to close driver server
	I1019 17:14:27.404810  308549 main.go:143] libmachine: (test-preload-360119) Calling .Close
	I1019 17:14:27.405021  308549 main.go:143] libmachine: Successfully made call to close driver server
	I1019 17:14:27.405035  308549 main.go:143] libmachine: Making call to close connection to plugin binary
	I1019 17:14:27.405059  308549 main.go:143] libmachine: (test-preload-360119) DBG | Closing plugin on server side
	I1019 17:14:27.407170  308549 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1019 17:14:27.408045  308549 addons.go:515] duration metric: took 973.59861ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1019 17:14:27.408084  308549 start.go:247] waiting for cluster config update ...
	I1019 17:14:27.408101  308549 start.go:256] writing updated cluster config ...
	I1019 17:14:27.408340  308549 ssh_runner.go:195] Run: rm -f paused
	I1019 17:14:27.413538  308549 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:14:27.414001  308549 kapi.go:59] client config for test-preload-360119: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/test-preload-360119/client.key", CAFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:14:27.416741  308549 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-lpfq5" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:14:29.421592  308549 pod_ready.go:104] pod "coredns-668d6bf9bc-lpfq5" is not "Ready", error: <nil>
	W1019 17:14:31.422850  308549 pod_ready.go:104] pod "coredns-668d6bf9bc-lpfq5" is not "Ready", error: <nil>
	I1019 17:14:33.422905  308549 pod_ready.go:94] pod "coredns-668d6bf9bc-lpfq5" is "Ready"
	I1019 17:14:33.422946  308549 pod_ready.go:86] duration metric: took 6.006177905s for pod "coredns-668d6bf9bc-lpfq5" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:33.425528  308549 pod_ready.go:83] waiting for pod "etcd-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:33.430306  308549 pod_ready.go:94] pod "etcd-test-preload-360119" is "Ready"
	I1019 17:14:33.430338  308549 pod_ready.go:86] duration metric: took 4.776195ms for pod "etcd-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:33.432230  308549 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:34.438059  308549 pod_ready.go:94] pod "kube-apiserver-test-preload-360119" is "Ready"
	I1019 17:14:34.438093  308549 pod_ready.go:86] duration metric: took 1.005844501s for pod "kube-apiserver-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:34.440052  308549 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:14:36.445183  308549 pod_ready.go:104] pod "kube-controller-manager-test-preload-360119" is not "Ready", error: <nil>
	W1019 17:14:38.445444  308549 pod_ready.go:104] pod "kube-controller-manager-test-preload-360119" is not "Ready", error: <nil>
	I1019 17:14:39.446292  308549 pod_ready.go:94] pod "kube-controller-manager-test-preload-360119" is "Ready"
	I1019 17:14:39.446322  308549 pod_ready.go:86] duration metric: took 5.00624128s for pod "kube-controller-manager-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:39.448645  308549 pod_ready.go:83] waiting for pod "kube-proxy-qtqvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:39.452841  308549 pod_ready.go:94] pod "kube-proxy-qtqvh" is "Ready"
	I1019 17:14:39.452860  308549 pod_ready.go:86] duration metric: took 4.189027ms for pod "kube-proxy-qtqvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:39.454905  308549 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:39.620595  308549 pod_ready.go:94] pod "kube-scheduler-test-preload-360119" is "Ready"
	I1019 17:14:39.620629  308549 pod_ready.go:86] duration metric: took 165.698388ms for pod "kube-scheduler-test-preload-360119" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:14:39.620646  308549 pod_ready.go:40] duration metric: took 12.207082932s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:14:39.663305  308549 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1019 17:14:39.664732  308549 out.go:203] 
	W1019 17:14:39.665709  308549 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1019 17:14:39.666699  308549 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1019 17:14:39.667961  308549 out.go:179] * Done! kubectl is now configured to use "test-preload-360119" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.502782837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894080502760207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9325955-17c3-4208-bbfc-6b3cb53a39fe name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.503346918Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80ebf410-133e-4695-9a0b-78ebfd0bdcef name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.503403229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80ebf410-133e-4695-9a0b-78ebfd0bdcef name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.503580612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e726f28182e8f3557a32d98cb07442012e11062c1829852c65114a7fed0dfb7,PodSandboxId:b522c653cfcd2e909dc13304c3d87c5b76a347ab72eea1a4477f810b22c3ae5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760894068986879532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lpfq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fb550d-0003-48ca-ab75-5ee6cad71963,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d179b9cd9243be2d020b6aa75d47f0cc9491d55ff7eccfeaf273389961253dc1,PodSandboxId:9dd36be70347461764a856a46ab55af7aba504e0066194bbddd9626e456dd8c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760894065499949967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtqvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb36f695-65ea-4d5e-807a-cda5aca26c04,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bf448487965183d9f4498bba3af5f3ef5260c39368c176ab5eafed7a7b6b27,PodSandboxId:74a55d2dc6a0ecd889392799930b0e78cfce6d83155160925760c0bb648b81b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760894065471017541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
585391-07bd-4c38-9a28-35410c69fd35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75eacb29493e5ee14b22af02fb7cb9bfaeda763d021892c9946187079a57a459,PodSandboxId:b69c388be64c1ab61940fd9009dcbdfbd49b42479d24cbb4c42d3622725853ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760894061127672314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2747761a0
5af5da0634935b8cb5ba1c9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcca5546861402b93182980dc280d0bc0ae41f3efbd30040ebdd3c6772054376,PodSandboxId:d56c5aa0f2c859056b98ea0ec697e096622871434afd6b5bac029625a2a13a3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760894061163256790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ef1d6835e1ba6290b172cb9a650ec6,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dea6d15b1c9725feb919241c1d727ae580eba5a38f855fbdcb0be9a1d98cb79,PodSandboxId:cf656ca346d63ec3f38382d6b1a6b96f266a5bdad76198c2049e4389599657b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760894061144081677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18df643007720ab5be34234813377087,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfa2635201ed0bb42973429369eec553be40da43e9e4e66ec707272156feb10,PodSandboxId:69988e5d949c8791f7062968908c11fc8629c9026f0ed0a927b0d167b4999f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760894061106444308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee926024638120593a4e4b57fee1851,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80ebf410-133e-4695-9a0b-78ebfd0bdcef name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.540222118Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88017364-6864-4052-a5be-871ddb525840 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.540302361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88017364-6864-4052-a5be-871ddb525840 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.541421054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=613ae0b5-ca92-47b5-9e0f-0407bcb5afa5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.541867810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894080541846308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=613ae0b5-ca92-47b5-9e0f-0407bcb5afa5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.542418659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5577f34e-d7c4-490f-aa85-082b59fde7c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.542525544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5577f34e-d7c4-490f-aa85-082b59fde7c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.542674842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e726f28182e8f3557a32d98cb07442012e11062c1829852c65114a7fed0dfb7,PodSandboxId:b522c653cfcd2e909dc13304c3d87c5b76a347ab72eea1a4477f810b22c3ae5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760894068986879532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lpfq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fb550d-0003-48ca-ab75-5ee6cad71963,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d179b9cd9243be2d020b6aa75d47f0cc9491d55ff7eccfeaf273389961253dc1,PodSandboxId:9dd36be70347461764a856a46ab55af7aba504e0066194bbddd9626e456dd8c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760894065499949967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtqvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb36f695-65ea-4d5e-807a-cda5aca26c04,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bf448487965183d9f4498bba3af5f3ef5260c39368c176ab5eafed7a7b6b27,PodSandboxId:74a55d2dc6a0ecd889392799930b0e78cfce6d83155160925760c0bb648b81b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760894065471017541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
585391-07bd-4c38-9a28-35410c69fd35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75eacb29493e5ee14b22af02fb7cb9bfaeda763d021892c9946187079a57a459,PodSandboxId:b69c388be64c1ab61940fd9009dcbdfbd49b42479d24cbb4c42d3622725853ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760894061127672314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2747761a0
5af5da0634935b8cb5ba1c9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcca5546861402b93182980dc280d0bc0ae41f3efbd30040ebdd3c6772054376,PodSandboxId:d56c5aa0f2c859056b98ea0ec697e096622871434afd6b5bac029625a2a13a3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760894061163256790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ef1d6835e1ba6290b172cb9a650ec6,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dea6d15b1c9725feb919241c1d727ae580eba5a38f855fbdcb0be9a1d98cb79,PodSandboxId:cf656ca346d63ec3f38382d6b1a6b96f266a5bdad76198c2049e4389599657b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760894061144081677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18df643007720ab5be34234813377087,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfa2635201ed0bb42973429369eec553be40da43e9e4e66ec707272156feb10,PodSandboxId:69988e5d949c8791f7062968908c11fc8629c9026f0ed0a927b0d167b4999f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760894061106444308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee926024638120593a4e4b57fee1851,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5577f34e-d7c4-490f-aa85-082b59fde7c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.579392743Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ada825d-7261-450d-8f9f-7712163115d7 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.579477455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ada825d-7261-450d-8f9f-7712163115d7 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.580326524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58430d4b-f928-463a-8470-c05d7566464d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.580775684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894080580755437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58430d4b-f928-463a-8470-c05d7566464d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.581374055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c6a784f-2a48-454d-b791-c3da2dcbbf14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.581459818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c6a784f-2a48-454d-b791-c3da2dcbbf14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.582201508Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e726f28182e8f3557a32d98cb07442012e11062c1829852c65114a7fed0dfb7,PodSandboxId:b522c653cfcd2e909dc13304c3d87c5b76a347ab72eea1a4477f810b22c3ae5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760894068986879532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lpfq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fb550d-0003-48ca-ab75-5ee6cad71963,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d179b9cd9243be2d020b6aa75d47f0cc9491d55ff7eccfeaf273389961253dc1,PodSandboxId:9dd36be70347461764a856a46ab55af7aba504e0066194bbddd9626e456dd8c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760894065499949967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtqvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb36f695-65ea-4d5e-807a-cda5aca26c04,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bf448487965183d9f4498bba3af5f3ef5260c39368c176ab5eafed7a7b6b27,PodSandboxId:74a55d2dc6a0ecd889392799930b0e78cfce6d83155160925760c0bb648b81b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760894065471017541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
585391-07bd-4c38-9a28-35410c69fd35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75eacb29493e5ee14b22af02fb7cb9bfaeda763d021892c9946187079a57a459,PodSandboxId:b69c388be64c1ab61940fd9009dcbdfbd49b42479d24cbb4c42d3622725853ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760894061127672314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2747761a0
5af5da0634935b8cb5ba1c9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcca5546861402b93182980dc280d0bc0ae41f3efbd30040ebdd3c6772054376,PodSandboxId:d56c5aa0f2c859056b98ea0ec697e096622871434afd6b5bac029625a2a13a3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760894061163256790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ef1d6835e1ba6290b172cb9a650ec6,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dea6d15b1c9725feb919241c1d727ae580eba5a38f855fbdcb0be9a1d98cb79,PodSandboxId:cf656ca346d63ec3f38382d6b1a6b96f266a5bdad76198c2049e4389599657b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760894061144081677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18df643007720ab5be34234813377087,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfa2635201ed0bb42973429369eec553be40da43e9e4e66ec707272156feb10,PodSandboxId:69988e5d949c8791f7062968908c11fc8629c9026f0ed0a927b0d167b4999f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760894061106444308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee926024638120593a4e4b57fee1851,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c6a784f-2a48-454d-b791-c3da2dcbbf14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.617363025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09a807e6-2db1-46d3-b651-cfb790bbaf8e name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.617444646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09a807e6-2db1-46d3-b651-cfb790bbaf8e name=/runtime.v1.RuntimeService/Version
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.618374695Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c45445a-90d8-485e-ad37-43831b5439d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.618832818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894080618807740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c45445a-90d8-485e-ad37-43831b5439d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.619438784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=80db9316-44bb-491b-98ad-56abe248e4ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.619490333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=80db9316-44bb-491b-98ad-56abe248e4ee name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:14:40 test-preload-360119 crio[833]: time="2025-10-19 17:14:40.619687690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e726f28182e8f3557a32d98cb07442012e11062c1829852c65114a7fed0dfb7,PodSandboxId:b522c653cfcd2e909dc13304c3d87c5b76a347ab72eea1a4477f810b22c3ae5c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1760894068986879532,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-lpfq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87fb550d-0003-48ca-ab75-5ee6cad71963,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d179b9cd9243be2d020b6aa75d47f0cc9491d55ff7eccfeaf273389961253dc1,PodSandboxId:9dd36be70347461764a856a46ab55af7aba504e0066194bbddd9626e456dd8c9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1760894065499949967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qtqvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fb36f695-65ea-4d5e-807a-cda5aca26c04,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31bf448487965183d9f4498bba3af5f3ef5260c39368c176ab5eafed7a7b6b27,PodSandboxId:74a55d2dc6a0ecd889392799930b0e78cfce6d83155160925760c0bb648b81b9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1760894065471017541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c
585391-07bd-4c38-9a28-35410c69fd35,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75eacb29493e5ee14b22af02fb7cb9bfaeda763d021892c9946187079a57a459,PodSandboxId:b69c388be64c1ab61940fd9009dcbdfbd49b42479d24cbb4c42d3622725853ca,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1760894061127672314,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2747761a0
5af5da0634935b8cb5ba1c9,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcca5546861402b93182980dc280d0bc0ae41f3efbd30040ebdd3c6772054376,PodSandboxId:d56c5aa0f2c859056b98ea0ec697e096622871434afd6b5bac029625a2a13a3a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1760894061163256790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75ef1d6835e1ba6290b172cb9a650ec6,},Annotations:map
[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dea6d15b1c9725feb919241c1d727ae580eba5a38f855fbdcb0be9a1d98cb79,PodSandboxId:cf656ca346d63ec3f38382d6b1a6b96f266a5bdad76198c2049e4389599657b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1760894061144081677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18df643007720ab5be34234813377087,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecfa2635201ed0bb42973429369eec553be40da43e9e4e66ec707272156feb10,PodSandboxId:69988e5d949c8791f7062968908c11fc8629c9026f0ed0a927b0d167b4999f79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1760894061106444308,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-360119,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ee926024638120593a4e4b57fee1851,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=80db9316-44bb-491b-98ad-56abe248e4ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e726f28182e8       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   b522c653cfcd2       coredns-668d6bf9bc-lpfq5
	d179b9cd9243b       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   15 seconds ago      Running             kube-proxy                1                   9dd36be703474       kube-proxy-qtqvh
	31bf448487965       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   74a55d2dc6a0e       storage-provisioner
	fcca554686140       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   d56c5aa0f2c85       etcd-test-preload-360119
	7dea6d15b1c97       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   cf656ca346d63       kube-apiserver-test-preload-360119
	75eacb29493e5       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   b69c388be64c1       kube-scheduler-test-preload-360119
	ecfa2635201ed       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   69988e5d949c8       kube-controller-manager-test-preload-360119
	
	
	==> coredns [6e726f28182e8f3557a32d98cb07442012e11062c1829852c65114a7fed0dfb7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59972 - 19472 "HINFO IN 2236924544109593976.2918719905995623289. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.039835369s
	
	
	==> describe nodes <==
	Name:               test-preload-360119
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-360119
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=test-preload-360119
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_13_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:13:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-360119
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:14:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:14:26 +0000   Sun, 19 Oct 2025 17:13:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:14:26 +0000   Sun, 19 Oct 2025 17:13:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:14:26 +0000   Sun, 19 Oct 2025 17:13:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:14:26 +0000   Sun, 19 Oct 2025 17:14:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    test-preload-360119
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdc7f77026aa4b6eaf53c4fd14dcca90
	  System UUID:                cdc7f770-26aa-4b6e-af53-c4fd14dcca90
	  Boot ID:                    73509f1a-3320-4697-b9a3-7ff360e423a2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-lpfq5                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     79s
	  kube-system                 etcd-test-preload-360119                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         83s
	  kube-system                 kube-apiserver-test-preload-360119             250m (12%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-controller-manager-test-preload-360119    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-qtqvh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-test-preload-360119             100m (5%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node test-preload-360119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node test-preload-360119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node test-preload-360119 status is now: NodeHasSufficientPID
	  Normal   NodeReady                82s                kubelet          Node test-preload-360119 status is now: NodeReady
	  Normal   RegisteredNode           80s                node-controller  Node test-preload-360119 event: Registered Node test-preload-360119 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-360119 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-360119 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-360119 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 16s                kubelet          Node test-preload-360119 has been rebooted, boot id: 73509f1a-3320-4697-b9a3-7ff360e423a2
	  Normal   RegisteredNode           13s                node-controller  Node test-preload-360119 event: Registered Node test-preload-360119 in Controller
	
	
	==> dmesg <==
	[Oct19 17:13] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Oct19 17:14] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008725] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.967281] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.082705] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.099935] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.457484] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.451768] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [fcca5546861402b93182980dc280d0bc0ae41f3efbd30040ebdd3c6772054376] <==
	{"level":"info","ts":"2025-10-19T17:14:21.550147Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2025-10-19T17:14:21.550211Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2025-10-19T17:14:21.550298Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:21.550338Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-19T17:14:21.557566Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-19T17:14:21.562996Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2025-10-19T17:14:21.563031Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2025-10-19T17:14:21.568614Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-19T17:14:21.568679Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-19T17:14:23.292584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:23.292632Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:23.292668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2025-10-19T17:14:23.292681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:23.292701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:23.292709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:23.292716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2025-10-19T17:14:23.294144Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:test-preload-360119 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-19T17:14:23.294182Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:23.294377Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:23.294402Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-19T17:14:23.294471Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-19T17:14:23.295109Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T17:14:23.295728Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-19T17:14:23.295732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2025-10-19T17:14:23.296300Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 17:14:40 up 0 min,  0 users,  load average: 0.59, 0.16, 0.05
	Linux test-preload-360119 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7dea6d15b1c9725feb919241c1d727ae580eba5a38f855fbdcb0be9a1d98cb79] <==
	I1019 17:14:24.496276       1 policy_source.go:240] refreshing policies
	I1019 17:14:24.497428       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:14:24.497535       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:14:24.503739       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:14:24.507039       1 shared_informer.go:320] Caches are synced for configmaps
	I1019 17:14:24.507097       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:14:24.507039       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:14:24.507290       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:14:24.511733       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1019 17:14:24.511775       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1019 17:14:24.511833       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:14:24.511845       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:14:24.511850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:14:24.511855       1 cache.go:39] Caches are synced for autoregister controller
	E1019 17:14:24.517745       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1019 17:14:24.528451       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:14:25.029920       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1019 17:14:25.311423       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:14:26.203645       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1019 17:14:26.235638       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1019 17:14:26.260010       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:14:26.265906       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:14:27.686355       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:14:28.037862       1 controller.go:615] quota admission added evaluator for: endpoints
	I1019 17:14:28.089959       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ecfa2635201ed0bb42973429369eec553be40da43e9e4e66ec707272156feb10] <==
	I1019 17:14:27.687570       1 shared_informer.go:320] Caches are synced for PV protection
	I1019 17:14:27.687580       1 shared_informer.go:320] Caches are synced for cronjob
	I1019 17:14:27.688747       1 shared_informer.go:320] Caches are synced for stateful set
	I1019 17:14:27.688782       1 shared_informer.go:320] Caches are synced for resource quota
	I1019 17:14:27.688768       1 shared_informer.go:320] Caches are synced for resource quota
	I1019 17:14:27.689160       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1019 17:14:27.703577       1 shared_informer.go:320] Caches are synced for garbage collector
	I1019 17:14:27.703595       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1019 17:14:27.703601       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1019 17:14:27.709873       1 shared_informer.go:320] Caches are synced for daemon sets
	I1019 17:14:27.717098       1 shared_informer.go:320] Caches are synced for crt configmap
	I1019 17:14:27.717248       1 shared_informer.go:320] Caches are synced for garbage collector
	I1019 17:14:27.719435       1 shared_informer.go:320] Caches are synced for endpoint
	I1019 17:14:27.721945       1 shared_informer.go:320] Caches are synced for job
	I1019 17:14:27.733629       1 shared_informer.go:320] Caches are synced for taint
	I1019 17:14:27.733812       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:14:27.733651       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1019 17:14:27.733667       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1019 17:14:27.733940       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-360119"
	I1019 17:14:27.733972       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:14:28.095394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="361.487516ms"
	I1019 17:14:28.096770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="124.576µs"
	I1019 17:14:29.101413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.116µs"
	I1019 17:14:33.291845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="14.632539ms"
	I1019 17:14:33.292096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="42.186µs"
	
	
	==> kube-proxy [d179b9cd9243be2d020b6aa75d47f0cc9491d55ff7eccfeaf273389961253dc1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1019 17:14:25.731058       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1019 17:14:25.741000       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.174"]
	E1019 17:14:25.741243       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:14:25.773762       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1019 17:14:25.773826       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 17:14:25.773858       1 server_linux.go:170] "Using iptables Proxier"
	I1019 17:14:25.776634       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:14:25.776882       1 server.go:497] "Version info" version="v1.32.0"
	I1019 17:14:25.777035       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:25.778457       1 config.go:199] "Starting service config controller"
	I1019 17:14:25.779879       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1019 17:14:25.778680       1 config.go:329] "Starting node config controller"
	I1019 17:14:25.779917       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1019 17:14:25.778935       1 config.go:105] "Starting endpoint slice config controller"
	I1019 17:14:25.779926       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1019 17:14:25.880605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1019 17:14:25.880653       1 shared_informer.go:320] Caches are synced for service config
	I1019 17:14:25.880761       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [75eacb29493e5ee14b22af02fb7cb9bfaeda763d021892c9946187079a57a459] <==
	I1019 17:14:22.271678       1 serving.go:386] Generated self-signed cert in-memory
	W1019 17:14:24.384040       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1019 17:14:24.384126       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1019 17:14:24.384148       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1019 17:14:24.384166       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1019 17:14:24.442551       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1019 17:14:24.442660       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:14:24.445420       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:14:24.445494       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1019 17:14:24.445608       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1019 17:14:24.445683       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:14:24.545973       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.579468    1153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.581558    1153 setters.go:602] "Node became not ready" node="test-preload-360119" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-19T17:14:24Z","lastTransitionTime":"2025-10-19T17:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: E1019 17:14:24.589430    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-360119\" already exists" pod="kube-system/kube-apiserver-test-preload-360119"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.589479    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-360119"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: E1019 17:14:24.600217    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-360119\" already exists" pod="kube-system/kube-controller-manager-test-preload-360119"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.600254    1153 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-360119"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: E1019 17:14:24.607481    1153 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-360119\" already exists" pod="kube-system/etcd-test-preload-360119"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.943530    1153 apiserver.go:52] "Watching apiserver"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: E1019 17:14:24.947865    1153 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-lpfq5" podUID="87fb550d-0003-48ca-ab75-5ee6cad71963"
	Oct 19 17:14:24 test-preload-360119 kubelet[1153]: I1019 17:14:24.958882    1153 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: I1019 17:14:25.027058    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb36f695-65ea-4d5e-807a-cda5aca26c04-xtables-lock\") pod \"kube-proxy-qtqvh\" (UID: \"fb36f695-65ea-4d5e-807a-cda5aca26c04\") " pod="kube-system/kube-proxy-qtqvh"
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: I1019 17:14:25.027087    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb36f695-65ea-4d5e-807a-cda5aca26c04-lib-modules\") pod \"kube-proxy-qtqvh\" (UID: \"fb36f695-65ea-4d5e-807a-cda5aca26c04\") " pod="kube-system/kube-proxy-qtqvh"
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: I1019 17:14:25.027120    1153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c585391-07bd-4c38-9a28-35410c69fd35-tmp\") pod \"storage-provisioner\" (UID: \"5c585391-07bd-4c38-9a28-35410c69fd35\") " pod="kube-system/storage-provisioner"
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: E1019 17:14:25.027399    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: E1019 17:14:25.027452    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume podName:87fb550d-0003-48ca-ab75-5ee6cad71963 nodeName:}" failed. No retries permitted until 2025-10-19 17:14:25.527430909 +0000 UTC m=+6.675336110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume") pod "coredns-668d6bf9bc-lpfq5" (UID: "87fb550d-0003-48ca-ab75-5ee6cad71963") : object "kube-system"/"coredns" not registered
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: E1019 17:14:25.531088    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 17:14:25 test-preload-360119 kubelet[1153]: E1019 17:14:25.531159    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume podName:87fb550d-0003-48ca-ab75-5ee6cad71963 nodeName:}" failed. No retries permitted until 2025-10-19 17:14:26.531145714 +0000 UTC m=+7.679050928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume") pod "coredns-668d6bf9bc-lpfq5" (UID: "87fb550d-0003-48ca-ab75-5ee6cad71963") : object "kube-system"/"coredns" not registered
	Oct 19 17:14:26 test-preload-360119 kubelet[1153]: I1019 17:14:26.363496    1153 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 19 17:14:26 test-preload-360119 kubelet[1153]: E1019 17:14:26.536014    1153 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 19 17:14:26 test-preload-360119 kubelet[1153]: E1019 17:14:26.536084    1153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume podName:87fb550d-0003-48ca-ab75-5ee6cad71963 nodeName:}" failed. No retries permitted until 2025-10-19 17:14:28.536071273 +0000 UTC m=+9.683976487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/87fb550d-0003-48ca-ab75-5ee6cad71963-config-volume") pod "coredns-668d6bf9bc-lpfq5" (UID: "87fb550d-0003-48ca-ab75-5ee6cad71963") : object "kube-system"/"coredns" not registered
	Oct 19 17:14:29 test-preload-360119 kubelet[1153]: E1019 17:14:29.024408    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894069023154675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 17:14:29 test-preload-360119 kubelet[1153]: E1019 17:14:29.024857    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894069023154675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 17:14:33 test-preload-360119 kubelet[1153]: I1019 17:14:33.262800    1153 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:14:39 test-preload-360119 kubelet[1153]: E1019 17:14:39.026066    1153 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894079025833628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 19 17:14:39 test-preload-360119 kubelet[1153]: E1019 17:14:39.026085    1153 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894079025833628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [31bf448487965183d9f4498bba3af5f3ef5260c39368c176ab5eafed7a7b6b27] <==
	I1019 17:14:25.656206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-360119 -n test-preload-360119
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-360119 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-360119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-360119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-360119: (1.00993817s)
--- FAIL: TestPreload (137.15s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-046984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-046984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (54.283303761s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-046984] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-046984" primary control-plane node in "pause-046984" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-046984" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:17:57.080147  311731 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:17:57.080256  311731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:57.080261  311731 out.go:374] Setting ErrFile to fd 2...
	I1019 17:17:57.080265  311731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:17:57.080491  311731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:17:57.081028  311731 out.go:368] Setting JSON to false
	I1019 17:17:57.082132  311731 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10819,"bootTime":1760883458,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:17:57.083042  311731 start.go:143] virtualization: kvm guest
	I1019 17:17:57.084909  311731 out.go:179] * [pause-046984] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:17:57.086217  311731 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:17:57.086235  311731 notify.go:221] Checking for updates...
	I1019 17:17:57.087261  311731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:17:57.088805  311731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:17:57.089928  311731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 17:17:57.091032  311731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:17:57.091914  311731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:17:57.093367  311731 config.go:182] Loaded profile config "pause-046984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:17:57.093863  311731 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:17:57.093913  311731 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:17:57.109869  311731 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46357
	I1019 17:17:57.110437  311731 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:17:57.111032  311731 main.go:143] libmachine: Using API Version  1
	I1019 17:17:57.111101  311731 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:17:57.111572  311731 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:17:57.111881  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:17:57.112297  311731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:17:57.112604  311731 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:17:57.112674  311731 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:17:57.127060  311731 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44545
	I1019 17:17:57.127623  311731 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:17:57.128260  311731 main.go:143] libmachine: Using API Version  1
	I1019 17:17:57.128289  311731 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:17:57.128833  311731 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:17:57.129048  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:17:57.165292  311731 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 17:17:57.166195  311731 start.go:309] selected driver: kvm2
	I1019 17:17:57.166211  311731 start.go:930] validating driver "kvm2" against &{Name:pause-046984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-046984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:57.166422  311731 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:17:57.166772  311731 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:57.166855  311731 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:17:57.181996  311731 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:17:57.182033  311731 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:17:57.196872  311731 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:17:57.197776  311731 cni.go:84] Creating CNI manager for ""
	I1019 17:17:57.197836  311731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:17:57.197882  311731 start.go:353] cluster config:
	{Name:pause-046984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-046984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:17:57.198066  311731 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:17:57.199776  311731 out.go:179] * Starting "pause-046984" primary control-plane node in "pause-046984" cluster
	I1019 17:17:57.200807  311731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:17:57.200858  311731 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:17:57.200872  311731 cache.go:59] Caching tarball of preloaded images
	I1019 17:17:57.200952  311731 preload.go:233] Found /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:17:57.200967  311731 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:17:57.201179  311731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/config.json ...
	I1019 17:17:57.201433  311731 start.go:360] acquireMachinesLock for pause-046984: {Name:mk3b19946e20646ec6cf08c56ebb92a1f48fa1bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 17:18:08.263439  311731 start.go:364] duration metric: took 11.061977139s to acquireMachinesLock for "pause-046984"
	I1019 17:18:08.263488  311731 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:18:08.263498  311731 fix.go:54] fixHost starting: 
	I1019 17:18:08.263928  311731 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:08.263999  311731 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:08.280838  311731 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:36499
	I1019 17:18:08.281354  311731 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:08.281927  311731 main.go:143] libmachine: Using API Version  1
	I1019 17:18:08.281957  311731 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:08.282307  311731 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:08.282526  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:08.282686  311731 main.go:143] libmachine: (pause-046984) Calling .GetState
	I1019 17:18:08.284660  311731 fix.go:112] recreateIfNeeded on pause-046984: state=Running err=<nil>
	W1019 17:18:08.284680  311731 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:18:08.287067  311731 out.go:252] * Updating the running kvm2 "pause-046984" VM ...
	I1019 17:18:08.287098  311731 machine.go:94] provisionDockerMachine start ...
	I1019 17:18:08.287112  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:08.287324  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:08.290108  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.290669  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:08.290709  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.290918  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:08.291103  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.291297  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.291453  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:08.291633  311731 main.go:143] libmachine: Using SSH client type: native
	I1019 17:18:08.292004  311731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1019 17:18:08.292024  311731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1019 17:18:08.397085  311731 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-046984
	
	I1019 17:18:08.397115  311731 main.go:143] libmachine: (pause-046984) Calling .GetMachineName
	I1019 17:18:08.397368  311731 buildroot.go:166] provisioning hostname "pause-046984"
	I1019 17:18:08.397401  311731 main.go:143] libmachine: (pause-046984) Calling .GetMachineName
	I1019 17:18:08.397615  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:08.401005  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.401515  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:08.401541  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.401757  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:08.401947  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.402144  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.402373  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:08.402616  311731 main.go:143] libmachine: Using SSH client type: native
	I1019 17:18:08.402939  311731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1019 17:18:08.402963  311731 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-046984 && echo "pause-046984" | sudo tee /etc/hostname
	I1019 17:18:08.533760  311731 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-046984
	
	I1019 17:18:08.533788  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:08.537233  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.537655  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:08.537696  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.537890  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:08.538092  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.538297  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:08.538462  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:08.538639  311731 main.go:143] libmachine: Using SSH client type: native
	I1019 17:18:08.538864  311731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1019 17:18:08.538887  311731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-046984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-046984/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-046984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1019 17:18:08.644552  311731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1019 17:18:08.644591  311731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21683-274250/.minikube CaCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21683-274250/.minikube}
	I1019 17:18:08.644615  311731 buildroot.go:174] setting up certificates
	I1019 17:18:08.644634  311731 provision.go:84] configureAuth start
	I1019 17:18:08.644654  311731 main.go:143] libmachine: (pause-046984) Calling .GetMachineName
	I1019 17:18:08.645002  311731 main.go:143] libmachine: (pause-046984) Calling .GetIP
	I1019 17:18:08.649334  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.649853  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:08.649884  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.650131  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:08.652373  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.652844  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:08.652873  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:08.653128  311731 provision.go:143] copyHostCerts
	I1019 17:18:08.653229  311731 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem, removing ...
	I1019 17:18:08.653249  311731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem
	I1019 17:18:08.653325  311731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/ca.pem (1082 bytes)
	I1019 17:18:08.653513  311731 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem, removing ...
	I1019 17:18:08.653528  311731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem
	I1019 17:18:08.653573  311731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/cert.pem (1123 bytes)
	I1019 17:18:08.653688  311731 exec_runner.go:144] found /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem, removing ...
	I1019 17:18:08.653700  311731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem
	I1019 17:18:08.653728  311731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21683-274250/.minikube/key.pem (1675 bytes)
	I1019 17:18:08.653811  311731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem org=jenkins.pause-046984 san=[127.0.0.1 192.168.39.42 localhost minikube pause-046984]
	I1019 17:18:09.379376  311731 provision.go:177] copyRemoteCerts
	I1019 17:18:09.379428  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1019 17:18:09.379455  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:09.382616  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:09.383111  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:09.383149  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:09.383430  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:09.383640  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:09.383811  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:09.383976  311731 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/pause-046984/id_rsa Username:docker}
	I1019 17:18:09.466730  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1019 17:18:09.502485  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1019 17:18:09.534535  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1019 17:18:09.570845  311731 provision.go:87] duration metric: took 926.178263ms to configureAuth
	I1019 17:18:09.570881  311731 buildroot.go:189] setting minikube options for container-runtime
	I1019 17:18:09.571148  311731 config.go:182] Loaded profile config "pause-046984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:18:09.571255  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:09.574438  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:09.574882  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:09.574915  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:09.575136  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:09.575353  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:09.575560  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:09.575756  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:09.575961  311731 main.go:143] libmachine: Using SSH client type: native
	I1019 17:18:09.576265  311731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1019 17:18:09.576291  311731 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1019 17:18:15.993489  311731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1019 17:18:15.993522  311731 machine.go:97] duration metric: took 7.706414805s to provisionDockerMachine
	I1019 17:18:15.993536  311731 start.go:293] postStartSetup for "pause-046984" (driver="kvm2")
	I1019 17:18:15.993549  311731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1019 17:18:15.993571  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:15.994010  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1019 17:18:15.994059  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:15.997947  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:15.998518  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:15.998546  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:15.998750  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:15.998947  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:15.999177  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:15.999372  311731 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/pause-046984/id_rsa Username:docker}
	I1019 17:18:16.084559  311731 ssh_runner.go:195] Run: cat /etc/os-release
	I1019 17:18:16.091001  311731 info.go:137] Remote host: Buildroot 2025.02
	I1019 17:18:16.091034  311731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/addons for local assets ...
	I1019 17:18:16.091126  311731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21683-274250/.minikube/files for local assets ...
	I1019 17:18:16.091333  311731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem -> 2782802.pem in /etc/ssl/certs
	I1019 17:18:16.091507  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1019 17:18:16.106399  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem --> /etc/ssl/certs/2782802.pem (1708 bytes)
	I1019 17:18:16.147636  311731 start.go:296] duration metric: took 154.080083ms for postStartSetup
	I1019 17:18:16.147686  311731 fix.go:56] duration metric: took 7.884187111s for fixHost
	I1019 17:18:16.147715  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:16.151335  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.151760  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:16.151793  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.152087  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:16.152304  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:16.152527  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:16.152697  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:16.152904  311731 main.go:143] libmachine: Using SSH client type: native
	I1019 17:18:16.153178  311731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1019 17:18:16.153194  311731 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1019 17:18:16.270400  311731 main.go:143] libmachine: SSH cmd err, output: <nil>: 1760894296.266310678
	
	I1019 17:18:16.270429  311731 fix.go:216] guest clock: 1760894296.266310678
	I1019 17:18:16.270441  311731 fix.go:229] Guest: 2025-10-19 17:18:16.266310678 +0000 UTC Remote: 2025-10-19 17:18:16.147691954 +0000 UTC m=+19.108120021 (delta=118.618724ms)
	I1019 17:18:16.270496  311731 fix.go:200] guest clock delta is within tolerance: 118.618724ms
	I1019 17:18:16.270509  311731 start.go:83] releasing machines lock for "pause-046984", held for 8.007042068s
	I1019 17:18:16.270547  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:16.270873  311731 main.go:143] libmachine: (pause-046984) Calling .GetIP
	I1019 17:18:16.274619  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.275125  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:16.275154  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.275382  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:16.276075  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:16.276295  311731 main.go:143] libmachine: (pause-046984) Calling .DriverName
	I1019 17:18:16.276441  311731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1019 17:18:16.276497  311731 ssh_runner.go:195] Run: cat /version.json
	I1019 17:18:16.276504  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:16.276526  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHHostname
	I1019 17:18:16.280027  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.280158  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.280724  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:16.280769  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.280798  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:16.280822  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:16.281102  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:16.281277  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHPort
	I1019 17:18:16.281375  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:16.281488  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHKeyPath
	I1019 17:18:16.281497  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:16.281633  311731 main.go:143] libmachine: (pause-046984) Calling .GetSSHUsername
	I1019 17:18:16.281664  311731 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/pause-046984/id_rsa Username:docker}
	I1019 17:18:16.281798  311731 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/pause-046984/id_rsa Username:docker}
	I1019 17:18:16.365693  311731 ssh_runner.go:195] Run: systemctl --version
	I1019 17:18:16.395241  311731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1019 17:18:16.553833  311731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1019 17:18:16.562872  311731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1019 17:18:16.562967  311731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1019 17:18:16.575141  311731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
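Any pre-existing bridge/podman CNI definitions found by the step above would have been set aside with a .mk_disabled suffix so they cannot conflict with the CNI minikube configures; in this run nothing matched. A minimal sketch of listing and restoring such files on the node (the <name> placeholder is illustrative, not a file from this log):

    # list anything minikube set aside (nothing was found in this run)
    sudo ls -la /etc/cni/net.d/*.mk_disabled 2>/dev/null
    # restore a file by stripping the suffix minikube appended
    sudo mv /etc/cni/net.d/<name>.mk_disabled /etc/cni/net.d/<name>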
	I1019 17:18:16.575164  311731 start.go:496] detecting cgroup driver to use...
	I1019 17:18:16.575232  311731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1019 17:18:16.597751  311731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1019 17:18:16.616971  311731 docker.go:218] disabling cri-docker service (if available) ...
	I1019 17:18:16.617060  311731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1019 17:18:16.639632  311731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1019 17:18:16.656334  311731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1019 17:18:17.074246  311731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1019 17:18:17.417373  311731 docker.go:234] disabling docker service ...
	I1019 17:18:17.417466  311731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1019 17:18:17.467411  311731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1019 17:18:17.511487  311731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1019 17:18:17.931655  311731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1019 17:18:18.218853  311731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1019 17:18:18.243644  311731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1019 17:18:18.319080  311731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1019 17:18:18.319172  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.340502  311731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1019 17:18:18.340585  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.379449  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.406437  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.431602  311731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1019 17:18:18.473801  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.507570  311731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1019 17:18:18.539231  311731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
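Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports. Their combined effect can be spot-checked on the node with a grep over the same drop-in file (a sketch; the expected lines reflect only the edits shown above, the real file carries more settings):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",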
	I1019 17:18:18.568886  311731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1019 17:18:18.596940  311731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1019 17:18:18.617501  311731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:18:18.945089  311731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1019 17:18:29.268044  311731 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.32290994s)
	I1019 17:18:29.268075  311731 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1019 17:18:29.268119  311731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1019 17:18:29.275867  311731 start.go:564] Will wait 60s for crictl version
	I1019 17:18:29.275939  311731 ssh_runner.go:195] Run: which crictl
	I1019 17:18:29.280780  311731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1019 17:18:29.319590  311731 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1019 17:18:29.319682  311731 ssh_runner.go:195] Run: crio --version
	I1019 17:18:29.350110  311731 ssh_runner.go:195] Run: crio --version
	I1019 17:18:29.383000  311731 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1019 17:18:29.384019  311731 main.go:143] libmachine: (pause-046984) Calling .GetIP
	I1019 17:18:29.387666  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:29.388320  311731 main.go:143] libmachine: (pause-046984) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:67:94", ip: ""} in network mk-pause-046984: {Iface:virbr1 ExpiryTime:2025-10-19 18:16:44 +0000 UTC Type:0 Mac:52:54:00:39:67:94 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:pause-046984 Clientid:01:52:54:00:39:67:94}
	I1019 17:18:29.388342  311731 main.go:143] libmachine: (pause-046984) DBG | domain pause-046984 has defined IP address 192.168.39.42 and MAC address 52:54:00:39:67:94 in network mk-pause-046984
	I1019 17:18:29.388595  311731 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1019 17:18:29.393949  311731 kubeadm.go:884] updating cluster {Name:pause-046984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-046984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1019 17:18:29.394114  311731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:18:29.394215  311731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:18:29.454275  311731 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:18:29.454301  311731 crio.go:433] Images already preloaded, skipping extraction
	I1019 17:18:29.454361  311731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1019 17:18:29.497424  311731 crio.go:514] all images are preloaded for cri-o runtime.
	I1019 17:18:29.497445  311731 cache_images.go:86] Images are preloaded, skipping loading
	I1019 17:18:29.497458  311731 kubeadm.go:935] updating node { 192.168.39.42 8443 v1.34.1 crio true true} ...
	I1019 17:18:29.497557  311731 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-046984 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-046984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1019 17:18:29.497631  311731 ssh_runner.go:195] Run: crio config
	I1019 17:18:29.553720  311731 cni.go:84] Creating CNI manager for ""
	I1019 17:18:29.553749  311731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:18:29.553775  311731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1019 17:18:29.553809  311731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.42 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-046984 NodeName:pause-046984 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1019 17:18:29.553996  311731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-046984"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.42"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.42"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1019 17:18:29.554071  311731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1019 17:18:29.569386  311731 binaries.go:44] Found k8s binaries, skipping transfer
	I1019 17:18:29.569446  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1019 17:18:29.582682  311731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1019 17:18:29.603904  311731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1019 17:18:29.625859  311731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
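The 2212-byte file copied here is the rendered kubeadm/kubelet/kube-proxy configuration shown above; the later `kubeadm init phase ...` invocations consume it via --config /var/tmp/minikube/kubeadm.yaml. On recent kubeadm releases it can also be sanity-checked in place (a sketch; assumes the staged v1.34.1 binaries and the renamed file path used further down in the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml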
	I1019 17:18:29.647904  311731 ssh_runner.go:195] Run: grep 192.168.39.42	control-plane.minikube.internal$ /etc/hosts
	I1019 17:18:29.652311  311731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:18:29.880783  311731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:18:29.906473  311731 certs.go:69] Setting up /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984 for IP: 192.168.39.42
	I1019 17:18:29.906493  311731 certs.go:195] generating shared ca certs ...
	I1019 17:18:29.906507  311731 certs.go:227] acquiring lock for ca certs: {Name:mk7795547103f90561160e6fc6ada1c3a2cc6617 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:18:29.906672  311731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key
	I1019 17:18:29.906731  311731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key
	I1019 17:18:29.906745  311731 certs.go:257] generating profile certs ...
	I1019 17:18:29.906858  311731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.key
	I1019 17:18:29.906934  311731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/apiserver.key.66423b66
	I1019 17:18:29.907001  311731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/proxy-client.key
	I1019 17:18:29.907174  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280.pem (1338 bytes)
	W1019 17:18:29.907228  311731 certs.go:480] ignoring /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280_empty.pem, impossibly tiny 0 bytes
	I1019 17:18:29.907241  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca-key.pem (1675 bytes)
	I1019 17:18:29.907282  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/ca.pem (1082 bytes)
	I1019 17:18:29.907311  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/cert.pem (1123 bytes)
	I1019 17:18:29.907356  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/certs/key.pem (1675 bytes)
	I1019 17:18:29.907422  311731 certs.go:484] found cert: /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem (1708 bytes)
	I1019 17:18:29.908272  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1019 17:18:29.945186  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1019 17:18:29.979278  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1019 17:18:30.013019  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1019 17:18:30.051913  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1019 17:18:30.095643  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1019 17:18:30.131566  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1019 17:18:30.164564  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1019 17:18:30.205076  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1019 17:18:30.235211  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/certs/278280.pem --> /usr/share/ca-certificates/278280.pem (1338 bytes)
	I1019 17:18:30.272453  311731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/ssl/certs/2782802.pem --> /usr/share/ca-certificates/2782802.pem (1708 bytes)
	I1019 17:18:30.305319  311731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1019 17:18:30.334717  311731 ssh_runner.go:195] Run: openssl version
	I1019 17:18:30.344341  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2782802.pem && ln -fs /usr/share/ca-certificates/2782802.pem /etc/ssl/certs/2782802.pem"
	I1019 17:18:30.364218  311731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2782802.pem
	I1019 17:18:30.371732  311731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 19 16:31 /usr/share/ca-certificates/2782802.pem
	I1019 17:18:30.371791  311731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2782802.pem
	I1019 17:18:30.381646  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2782802.pem /etc/ssl/certs/3ec20f2e.0"
	I1019 17:18:30.397407  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1019 17:18:30.413772  311731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:18:30.420903  311731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 19 16:22 /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:18:30.420976  311731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1019 17:18:30.435318  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1019 17:18:30.448510  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/278280.pem && ln -fs /usr/share/ca-certificates/278280.pem /etc/ssl/certs/278280.pem"
	I1019 17:18:30.466614  311731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/278280.pem
	I1019 17:18:30.473538  311731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 19 16:31 /usr/share/ca-certificates/278280.pem
	I1019 17:18:30.473616  311731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/278280.pem
	I1019 17:18:30.481687  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/278280.pem /etc/ssl/certs/51391683.0"
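Each of the three certificate blocks above copies a CA bundle into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL-based clients look up trust anchors. The hash-derived link name can be reproduced by hand (a sketch using the minikubeCA certificate; b5213941 matches the link created above):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"   # b5213941 for this certificate
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"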
	I1019 17:18:30.497876  311731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1019 17:18:30.505880  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1019 17:18:30.515255  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1019 17:18:30.525714  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1019 17:18:30.536107  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1019 17:18:30.545548  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1019 17:18:30.553421  311731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
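The `-checkend 86400` probes above ask OpenSSL whether each control-plane certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit is what would trigger regeneration. The same check can be run manually against any of the paths listed (sketch):

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"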
	I1019 17:18:30.562330  311731 kubeadm.go:401] StartCluster: {Name:pause-046984 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-046984 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:18:30.562487  311731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1019 17:18:30.562561  311731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1019 17:18:30.613906  311731 cri.go:89] found id: "6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4"
	I1019 17:18:30.613930  311731 cri.go:89] found id: "b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533"
	I1019 17:18:30.613936  311731 cri.go:89] found id: "d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6"
	I1019 17:18:30.613941  311731 cri.go:89] found id: "ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a"
	I1019 17:18:30.613945  311731 cri.go:89] found id: "180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9"
	I1019 17:18:30.613950  311731 cri.go:89] found id: "4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b"
	I1019 17:18:30.613955  311731 cri.go:89] found id: "e1db51aade13aefd316b3e0b486624a97b3cfe0b9db81e86030117f7012a0d3d"
	I1019 17:18:30.613959  311731 cri.go:89] found id: "33c1092e2a55ecd36e6cf0e56d1dcb142b72c47ac72947b60858a482ba61b4c9"
	I1019 17:18:30.613962  311731 cri.go:89] found id: "52af4ac4b377c6eaf94821fd8c66ddbb46a218565d3af7fe63dce5c1f204cbe1"
	I1019 17:18:30.613974  311731 cri.go:89] found id: "fec2ae57cb5bdcdc7f91c5c9fda6a7ea5d44b7bc471807c9e6357e0321313e1d"
	I1019 17:18:30.613995  311731 cri.go:89] found id: "b8e1d8ad2a6fdc864dc930f77554a1b37c38527c3713105ef6904f77857c5da3"
	I1019 17:18:30.614000  311731 cri.go:89] found id: "4c5676d2f1802b60a78c407a46253e3424fda135a1aaad85f64e0a4ded9cc112"
	I1019 17:18:30.614004  311731 cri.go:89] found id: ""
	I1019 17:18:30.614057  311731 ssh_runner.go:195] Run: sudo runc list -f json
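The IDs listed above are the kube-system containers CRI-O reports, and `runc list -f json` appears to be used next to check their live runtime state. Any of these IDs can be mapped back to a pod and container name with crictl (a sketch using the first ID from the list):

    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table
    sudo crictl inspect 6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4 \
      | grep -E '"name"|"state"' | head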

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-046984 -n pause-046984
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-046984 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-046984 logs -n 25: (1.396701011s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ test-preload-360119 image list                                                                                                                                     │ test-preload-360119       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ delete  │ -p test-preload-360119                                                                                                                                             │ test-preload-360119       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p scheduled-stop-593188 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --cancel-scheduled                                                                                                                        │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p scheduled-stop-593188                                                                                                                                           │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p pause-046984 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-046984              │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p force-systemd-env-064535 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-064535  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p offline-crio-033291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-033291       │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p cert-expiration-067580 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-067580    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:18 UTC │
	│ delete  │ -p force-systemd-env-064535                                                                                                                                        │ force-systemd-env-064535  │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p pause-046984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-046984              │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:18 UTC │
	│ delete  │ -p offline-crio-033291                                                                                                                                             │ offline-crio-033291       │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p stopped-upgrade-254072 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-254072    │ jenkins │ v1.32.0 │ 19 Oct 25 17:18 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-755918                                                                                                                                       │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:18:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:18:34.554242  312205 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:18:34.554340  312205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:18:34.554345  312205 out.go:374] Setting ErrFile to fd 2...
	I1019 17:18:34.554348  312205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:18:34.554687  312205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:18:34.555209  312205 out.go:368] Setting JSON to false
	I1019 17:18:34.556150  312205 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10857,"bootTime":1760883458,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:18:34.556346  312205 start.go:143] virtualization: kvm guest
	I1019 17:18:34.559166  312205 out.go:179] * [kubernetes-upgrade-755918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:18:34.560411  312205 notify.go:221] Checking for updates...
	I1019 17:18:34.560438  312205 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:18:34.561638  312205 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:18:34.562761  312205 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:18:34.563846  312205 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 17:18:34.564805  312205 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:18:34.565751  312205 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:18:34.567043  312205 config.go:182] Loaded profile config "kubernetes-upgrade-755918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:18:34.567649  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.567703  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.582491  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1019 17:18:34.583209  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.583774  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.583799  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.584237  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.584470  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.584749  312205 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:18:34.585202  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.585252  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.598512  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44861
	I1019 17:18:34.598967  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.599452  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.599494  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.599864  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.600100  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.639130  312205 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 17:18:34.640166  312205 start.go:309] selected driver: kvm2
	I1019 17:18:34.640189  312205 start.go:930] validating driver "kvm2" against &{Name:kubernetes-upgrade-755918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-755918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:18:34.640326  312205 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:18:34.641412  312205 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:18:34.641517  312205 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:18:34.657546  312205 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:18:34.657593  312205 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:18:34.671644  312205 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:18:34.672294  312205 cni.go:84] Creating CNI manager for ""
	I1019 17:18:34.672384  312205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:18:34.672446  312205 start.go:353] cluster config:
	{Name:kubernetes-upgrade-755918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-755918 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:18:34.672666  312205 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:18:34.674481  312205 out.go:179] * Starting "kubernetes-upgrade-755918" primary control-plane node in "kubernetes-upgrade-755918" cluster
	I1019 17:18:34.675573  312205 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:18:34.675633  312205 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:34.675649  312205 cache.go:59] Caching tarball of preloaded images
	I1019 17:18:34.675793  312205 preload.go:233] Found /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:18:34.675821  312205 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:18:34.675925  312205 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/kubernetes-upgrade-755918/config.json ...
	I1019 17:18:34.676160  312205 start.go:360] acquireMachinesLock for kubernetes-upgrade-755918: {Name:mk3b19946e20646ec6cf08c56ebb92a1f48fa1bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 17:18:34.676218  312205 start.go:364] duration metric: took 32.439µs to acquireMachinesLock for "kubernetes-upgrade-755918"
	I1019 17:18:34.676240  312205 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:18:34.676249  312205 fix.go:54] fixHost starting: 
	I1019 17:18:34.676548  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.676587  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.690872  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1019 17:18:34.691446  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.691915  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.691936  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.692365  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.692575  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.692775  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .GetState
	I1019 17:18:34.694821  312205 fix.go:112] recreateIfNeeded on kubernetes-upgrade-755918: state=Stopped err=<nil>
	I1019 17:18:34.694871  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	W1019 17:18:34.695059  312205 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:18:32.264341  311731 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179582689s)
	I1019 17:18:32.264409  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.578794  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.665483  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.773184  311731 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:18:32.773280  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.273421  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.774107  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.808910  311731 api_server.go:72] duration metric: took 1.035739574s to wait for apiserver process to appear ...
	I1019 17:18:33.808954  311731 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:18:33.809009  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:33.809669  311731 api_server.go:269] stopped: https://192.168.39.42:8443/healthz: Get "https://192.168.39.42:8443/healthz": dial tcp 192.168.39.42:8443: connect: connection refused
	I1019 17:18:34.309080  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.327971  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:18:36.328025  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:18:36.328045  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.364369  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:18:36.364402  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:18:36.809644  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.818067  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:18:36.818102  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:18:37.309581  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:37.316933  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:18:37.316968  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:18:37.809679  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:37.815234  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1019 17:18:37.822834  311731 api_server.go:141] control plane version: v1.34.1
	I1019 17:18:37.822867  311731 api_server.go:131] duration metric: took 4.013901901s to wait for apiserver health ...
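The healthz wait recorded above (first 403 from system:anonymous, then 500 while poststarthooks finish, finally 200 "ok") is a plain HTTP poll against https://192.168.39.42:8443/healthz. A minimal Go sketch of such a loop follows; the endpoint and rough timing are taken from the log, everything else is an assumption rather than minikube's actual api_server.go code.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func waitForHealthz(url string, timeout time.Duration) error {
    // TLS verification is skipped purely for illustration; the real checker
    // trusts the cluster CA instead.
    client := &http.Client{
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        Timeout:   5 * time.Second,
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // apiserver answered "ok"
            }
            // 403 (anonymous) or 500 (poststarthooks pending) just mean "not ready yet".
            fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
    if err := waitForHealthz("https://192.168.39.42:8443/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}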
	I1019 17:18:37.822880  311731 cni.go:84] Creating CNI manager for ""
	I1019 17:18:37.822889  311731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:18:37.824529  311731 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 17:18:37.825744  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 17:18:37.844782  311731 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
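The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log. For orientation only, a generic bridge + host-local conflist of the kind that step installs might look like the sketch below; the JSON contents and subnet are assumptions, not the file minikube actually writes.

package main

import "os"

// Hypothetical bridge CNI configuration; minikube's real conflist may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
    // Writing under /etc/cni/net.d requires root on the node, hence the
    // sudo mkdir + scp in the log above.
    if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
        panic(err)
    }
}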
	I1019 17:18:37.869522  311731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:18:37.874784  311731 system_pods.go:59] 6 kube-system pods found
	I1019 17:18:37.874829  311731 system_pods.go:61] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:37.874837  311731 system_pods.go:61] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:37.874848  311731 system_pods.go:61] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:37.874858  311731 system_pods.go:61] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:37.874871  311731 system_pods.go:61] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:18:37.874881  311731 system_pods.go:61] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:37.874893  311731 system_pods.go:74] duration metric: took 5.349375ms to wait for pod list to return data ...
	I1019 17:18:37.874904  311731 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:18:37.878487  311731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:18:37.878519  311731 node_conditions.go:123] node cpu capacity is 2
	I1019 17:18:37.878536  311731 node_conditions.go:105] duration metric: took 3.625884ms to run NodePressure ...
	I1019 17:18:37.878598  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:38.144840  311731 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1019 17:18:38.149651  311731 kubeadm.go:744] kubelet initialised
	I1019 17:18:38.149673  311731 kubeadm.go:745] duration metric: took 4.805641ms waiting for restarted kubelet to initialise ...
	I1019 17:18:38.149689  311731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:18:38.166874  311731 ops.go:34] apiserver oom_adj: -16
	I1019 17:18:38.166894  311731 kubeadm.go:602] duration metric: took 7.479990694s to restartPrimaryControlPlane
	I1019 17:18:38.166903  311731 kubeadm.go:403] duration metric: took 7.604585127s to StartCluster
	I1019 17:18:38.166925  311731 settings.go:142] acquiring lock: {Name:mkf8e8333d0302d1bf1fad4a2ff30b0524cb52b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:18:38.167019  311731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:18:38.168217  311731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/kubeconfig: {Name:mk22311d445eddc7a50c63a1389fab4cf9c803b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:18:38.168482  311731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:18:38.168544  311731 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:18:38.168809  311731 config.go:182] Loaded profile config "pause-046984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:18:38.170291  311731 out.go:179] * Verifying Kubernetes components...
	I1019 17:18:38.170987  311731 out.go:179] * Enabled addons: 
	I1019 17:18:34.696671  312205 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-755918" ...
	I1019 17:18:34.696706  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .Start
	I1019 17:18:34.696892  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) starting domain...
	I1019 17:18:34.696919  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) ensuring networks are active...
	I1019 17:18:34.697741  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Ensuring network default is active
	I1019 17:18:34.698254  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Ensuring network mk-kubernetes-upgrade-755918 is active
	I1019 17:18:34.698734  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) getting domain XML...
	I1019 17:18:34.699898  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | starting domain XML:
	I1019 17:18:34.699943  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | <domain type='kvm'>
	I1019 17:18:34.699955  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <name>kubernetes-upgrade-755918</name>
	I1019 17:18:34.699963  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <uuid>75d5236c-07d1-42f1-90c0-4c47e14e6c1c</uuid>
	I1019 17:18:34.699971  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <memory unit='KiB'>3145728</memory>
	I1019 17:18:34.699996  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1019 17:18:34.700007  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 17:18:34.700015  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <os>
	I1019 17:18:34.700025  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 17:18:34.700045  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <boot dev='cdrom'/>
	I1019 17:18:34.700055  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <boot dev='hd'/>
	I1019 17:18:34.700063  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <bootmenu enable='no'/>
	I1019 17:18:34.700071  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </os>
	I1019 17:18:34.700078  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <features>
	I1019 17:18:34.700086  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <acpi/>
	I1019 17:18:34.700093  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <apic/>
	I1019 17:18:34.700101  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <pae/>
	I1019 17:18:34.700108  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </features>
	I1019 17:18:34.700119  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 17:18:34.700127  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <clock offset='utc'/>
	I1019 17:18:34.700135  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 17:18:34.700142  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_reboot>restart</on_reboot>
	I1019 17:18:34.700150  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_crash>destroy</on_crash>
	I1019 17:18:34.700156  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <devices>
	I1019 17:18:34.700165  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 17:18:34.700172  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <disk type='file' device='cdrom'>
	I1019 17:18:34.700190  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <driver name='qemu' type='raw'/>
	I1019 17:18:34.700206  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/boot2docker.iso'/>
	I1019 17:18:34.700216  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 17:18:34.700223  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <readonly/>
	I1019 17:18:34.700233  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 17:18:34.700240  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </disk>
	I1019 17:18:34.700248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <disk type='file' device='disk'>
	I1019 17:18:34.700257  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 17:18:34.700280  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/kubernetes-upgrade-755918.rawdisk'/>
	I1019 17:18:34.700287  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target dev='hda' bus='virtio'/>
	I1019 17:18:34.700298  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 17:18:34.700305  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </disk>
	I1019 17:18:34.700314  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 17:18:34.700324  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 17:18:34.700332  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </controller>
	I1019 17:18:34.700339  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 17:18:34.700355  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 17:18:34.700364  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 17:18:34.700372  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </controller>
	I1019 17:18:34.700379  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <interface type='network'>
	I1019 17:18:34.700388  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <mac address='52:54:00:93:e7:4d'/>
	I1019 17:18:34.700395  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source network='mk-kubernetes-upgrade-755918'/>
	I1019 17:18:34.700403  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <model type='virtio'/>
	I1019 17:18:34.700412  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 17:18:34.700419  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </interface>
	I1019 17:18:34.700426  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <interface type='network'>
	I1019 17:18:34.700437  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <mac address='52:54:00:c3:f9:8d'/>
	I1019 17:18:34.700444  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source network='default'/>
	I1019 17:18:34.700452  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <model type='virtio'/>
	I1019 17:18:34.700461  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 17:18:34.700469  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </interface>
	I1019 17:18:34.700482  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <serial type='pty'>
	I1019 17:18:34.700493  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target type='isa-serial' port='0'>
	I1019 17:18:34.700499  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |         <model name='isa-serial'/>
	I1019 17:18:34.700507  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       </target>
	I1019 17:18:34.700512  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </serial>
	I1019 17:18:34.700519  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <console type='pty'>
	I1019 17:18:34.700525  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target type='serial' port='0'/>
	I1019 17:18:34.700534  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </console>
	I1019 17:18:34.700540  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <input type='mouse' bus='ps2'/>
	I1019 17:18:34.700547  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 17:18:34.700553  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <audio id='1' type='none'/>
	I1019 17:18:34.700560  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <memballoon model='virtio'>
	I1019 17:18:34.700568  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 17:18:34.700576  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </memballoon>
	I1019 17:18:34.700582  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <rng model='virtio'>
	I1019 17:18:34.700591  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <backend model='random'>/dev/random</backend>
	I1019 17:18:34.700599  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 17:18:34.700608  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </rng>
	I1019 17:18:34.700625  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </devices>
	I1019 17:18:34.700633  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | </domain>
	I1019 17:18:34.700641  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | 
	I1019 17:18:36.207669  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for domain to start...
	I1019 17:18:36.209248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) domain is now running
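The kvm2 driver talks to libvirt through Go bindings, so the "getting domain XML ... starting domain" sequence above corresponds roughly to the following sketch with libvirt.org/go/libvirt. The qemu:///system URI and the bare-bones error handling are assumptions for illustration, not the driver's actual code.

package main

import (
    "fmt"

    "libvirt.org/go/libvirt"
)

func main() {
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    dom, err := conn.LookupDomainByName("kubernetes-upgrade-755918")
    if err != nil {
        panic(err)
    }
    defer dom.Free()

    // Equivalent of the DBG "starting domain XML" dump in the log.
    xml, err := dom.GetXMLDesc(0)
    if err != nil {
        panic(err)
    }
    fmt.Println(xml)

    // Start the stopped-but-defined domain, as .Start does for the restarted VM.
    if err := dom.Create(); err != nil {
        panic(err)
    }
    fmt.Println("domain is now running")
}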
	I1019 17:18:36.209278  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for IP...
	I1019 17:18:36.210248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.210890  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) found domain IP: 192.168.50.129
	I1019 17:18:36.210917  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has current primary IP address 192.168.50.129 and MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.210925  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) reserving static IP address...
	I1019 17:18:36.211448  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-755918", mac: "52:54:00:93:e7:4d", ip: "192.168.50.129"} in network mk-kubernetes-upgrade-755918: {Iface:virbr2 ExpiryTime:2025-10-19 18:18:02 +0000 UTC Type:0 Mac:52:54:00:93:e7:4d Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:kubernetes-upgrade-755918 Clientid:01:52:54:00:93:e7:4d}
	I1019 17:18:36.211481  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | skip adding static IP to network mk-kubernetes-upgrade-755918 - found existing host DHCP lease matching {name: "kubernetes-upgrade-755918", mac: "52:54:00:93:e7:4d", ip: "192.168.50.129"}
	I1019 17:18:36.211502  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) reserved static IP address 192.168.50.129 for domain kubernetes-upgrade-755918
	I1019 17:18:36.211519  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for SSH...
	I1019 17:18:36.211528  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Getting to WaitForSSH function...
	I1019 17:18:36.214189  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.214544  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:e7:4d", ip: ""} in network mk-kubernetes-upgrade-755918: {Iface:virbr2 ExpiryTime:2025-10-19 18:18:02 +0000 UTC Type:0 Mac:52:54:00:93:e7:4d Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:kubernetes-upgrade-755918 Clientid:01:52:54:00:93:e7:4d}
	I1019 17:18:36.214575  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined IP address 192.168.50.129 and MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.214780  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Using SSH client type: external
	I1019 17:18:36.214810  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/id_rsa (-rw-------)
	I1019 17:18:36.214849  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 17:18:36.214884  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | About to run SSH command:
	I1019 17:18:36.214924  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | exit 0
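The WaitForSSH probe shown in the DBG lines shells out to the external ssh client and runs "exit 0" until it succeeds. A hedged Go equivalent, reusing the options and key path from the log (the retry loop itself is an assumption, not libmachine's code):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func sshReady(ip, keyPath string) bool {
    cmd := exec.Command("ssh",
        "-F", "/dev/null",
        "-o", "ConnectTimeout=10",
        "-o", "StrictHostKeyChecking=no",
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-i", keyPath,
        "-p", "22",
        "docker@"+ip,
        "exit 0")
    return cmd.Run() == nil // exit status 0 means sshd is up and the key is accepted
}

func main() {
    ip := "192.168.50.129"
    key := "/home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/id_rsa"
    for i := 0; i < 30; i++ {
        if sshReady(ip, key) {
            fmt.Println("SSH is ready")
            return
        }
        // A non-zero exit (e.g. the status 255 seen later in the log) means "not yet".
        time.Sleep(5 * time.Second)
    }
    fmt.Println("gave up waiting for SSH")
}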
	I1019 17:18:40.163219  312118 out.go:177] * Starting control plane node stopped-upgrade-254072 in cluster stopped-upgrade-254072
	I1019 17:18:40.164337  312118 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1019 17:18:40.269180  312118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:40.269202  312118 cache.go:56] Caching tarball of preloaded images
	I1019 17:18:40.269358  312118 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1019 17:18:40.270897  312118 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1019 17:18:40.271889  312118 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1019 17:18:40.384487  312118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:38.171635  311731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:18:38.172188  311731 addons.go:515] duration metric: took 3.654089ms for enable addons: enabled=[]
	I1019 17:18:38.406341  311731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:18:38.449239  311731 node_ready.go:35] waiting up to 6m0s for node "pause-046984" to be "Ready" ...
	I1019 17:18:38.455635  311731 node_ready.go:49] node "pause-046984" is "Ready"
	I1019 17:18:38.455695  311731 node_ready.go:38] duration metric: took 6.402162ms for node "pause-046984" to be "Ready" ...
	I1019 17:18:38.455719  311731 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:18:38.455789  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:38.486052  311731 api_server.go:72] duration metric: took 317.530927ms to wait for apiserver process to appear ...
	I1019 17:18:38.486082  311731 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:18:38.486103  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:38.492335  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1019 17:18:38.494599  311731 api_server.go:141] control plane version: v1.34.1
	I1019 17:18:38.494625  311731 api_server.go:131] duration metric: took 8.534531ms to wait for apiserver health ...
	I1019 17:18:38.494635  311731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:18:38.497591  311731 system_pods.go:59] 6 kube-system pods found
	I1019 17:18:38.497635  311731 system_pods.go:61] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:38.497647  311731 system_pods.go:61] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:38.497657  311731 system_pods.go:61] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:38.497671  311731 system_pods.go:61] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:38.497680  311731 system_pods.go:61] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running
	I1019 17:18:38.497691  311731 system_pods.go:61] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:38.497700  311731 system_pods.go:74] duration metric: took 3.057838ms to wait for pod list to return data ...
	I1019 17:18:38.497713  311731 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:18:38.500082  311731 default_sa.go:45] found service account: "default"
	I1019 17:18:38.500104  311731 default_sa.go:55] duration metric: took 2.382893ms for default service account to be created ...
	I1019 17:18:38.500115  311731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:18:38.503490  311731 system_pods.go:86] 6 kube-system pods found
	I1019 17:18:38.503534  311731 system_pods.go:89] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:38.503548  311731 system_pods.go:89] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:38.503565  311731 system_pods.go:89] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:38.503584  311731 system_pods.go:89] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:38.503594  311731 system_pods.go:89] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running
	I1019 17:18:38.503610  311731 system_pods.go:89] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:38.503623  311731 system_pods.go:126] duration metric: took 3.500524ms to wait for k8s-apps to be running ...
	I1019 17:18:38.503639  311731 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:18:38.503696  311731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:18:38.525624  311731 system_svc.go:56] duration metric: took 21.969985ms WaitForService to wait for kubelet
	I1019 17:18:38.525665  311731 kubeadm.go:587] duration metric: took 357.148406ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:18:38.525694  311731 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:18:38.531445  311731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:18:38.531482  311731 node_conditions.go:123] node cpu capacity is 2
	I1019 17:18:38.531503  311731 node_conditions.go:105] duration metric: took 5.800475ms to run NodePressure ...
	I1019 17:18:38.531525  311731 start.go:242] waiting for startup goroutines ...
	I1019 17:18:38.531547  311731 start.go:247] waiting for cluster config update ...
	I1019 17:18:38.531565  311731 start.go:256] writing updated cluster config ...
	I1019 17:18:38.532105  311731 ssh_runner.go:195] Run: rm -f paused
	I1019 17:18:38.538231  311731 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:18:38.539028  311731 kapi.go:59] client config for pause-046984: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.key", CAFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]
string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
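The rest.Config dump above amounts to certificate-based authentication against https://192.168.39.42:8443 using the profile's client.crt/client.key and the cluster ca.crt. A minimal client-go sketch that builds the same kind of config and lists kube-system pods; the list call is illustrative, not what kapi.go does next.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    cfg := &rest.Config{
        Host: "https://192.168.39.42:8443",
        TLSClientConfig: rest.TLSClientConfig{
            CertFile: "/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.crt",
            KeyFile:  "/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.key",
            CAFile:   "/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt",
        },
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}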
	I1019 17:18:38.542967  311731 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:18:40.548664  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	W1019 17:18:42.550405  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	W1019 17:18:44.551384  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	I1019 17:18:47.050067  311731 pod_ready.go:94] pod "coredns-66bc5c9577-z9rqv" is "Ready"
	I1019 17:18:47.050097  311731 pod_ready.go:86] duration metric: took 8.507099917s for pod "coredns-66bc5c9577-z9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.053018  311731 pod_ready.go:83] waiting for pod "etcd-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.057837  311731 pod_ready.go:94] pod "etcd-pause-046984" is "Ready"
	I1019 17:18:47.057869  311731 pod_ready.go:86] duration metric: took 4.823069ms for pod "etcd-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.059870  311731 pod_ready.go:83] waiting for pod "kube-apiserver-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.064362  311731 pod_ready.go:94] pod "kube-apiserver-pause-046984" is "Ready"
	I1019 17:18:47.064384  311731 pod_ready.go:86] duration metric: took 4.490903ms for pod "kube-apiserver-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.066629  311731 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.483538  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | SSH cmd err, output: exit status 255: 
	I1019 17:18:47.483571  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1019 17:18:47.483620  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | command : exit 0
	I1019 17:18:47.483649  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | err     : exit status 255
	I1019 17:18:47.483664  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | output  : 
	I1019 17:18:48.572484  311731 pod_ready.go:94] pod "kube-controller-manager-pause-046984" is "Ready"
	I1019 17:18:48.572524  311731 pod_ready.go:86] duration metric: took 1.505869409s for pod "kube-controller-manager-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:48.646772  311731 pod_ready.go:83] waiting for pod "kube-proxy-mnsqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:49.047261  311731 pod_ready.go:94] pod "kube-proxy-mnsqf" is "Ready"
	I1019 17:18:49.047286  311731 pod_ready.go:86] duration metric: took 400.478809ms for pod "kube-proxy-mnsqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:49.246736  311731 pod_ready.go:83] waiting for pod "kube-scheduler-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:51.252658  311731 pod_ready.go:94] pod "kube-scheduler-pause-046984" is "Ready"
	I1019 17:18:51.252685  311731 pod_ready.go:86] duration metric: took 2.005921951s for pod "kube-scheduler-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:51.252696  311731 pod_ready.go:40] duration metric: took 12.714424764s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
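Each pod_ready.go wait above repeatedly checks whether the pod reports the PodReady condition. A hedged client-go sketch of that check; the poll interval, timeout, and pod name are assumptions based on the log, not minikube's implementation.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

func waitPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
    return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
        func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient errors as "keep polling"
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    return true, nil // pod is "Ready", as logged at 17:18:47-17:18:51
                }
            }
            return false, nil
        })
}

func main() {
    var cs *kubernetes.Clientset // obtain as in the rest.Config sketch earlier
    if cs == nil {
        fmt.Println("clientset not configured in this sketch")
        return
    }
    if err := waitPodReady(cs, "kube-system", "kube-scheduler-pause-046984", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}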
	I1019 17:18:51.302454  311731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:18:51.304006  311731 out.go:179] * Done! kubectl is now configured to use "pause-046984" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.005677047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894332005650868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eac92a63-c1d3-4864-aa5b-943fb79ef530 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.006389015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7ac2bf1-a0b6-4cb9-a839-eca6064b14d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.006790807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7ac2bf1-a0b6-4cb9-a839-eca6064b14d7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.007186559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7ac2bf1-a0b6-4cb9-a839-eca6064b14d7 name=/runtime.v1.RuntimeService/ListContainers
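The ListContainers request/response pairs in this CRI-O log are ordinary CRI gRPC calls over the crio socket; crictl ps -a issues the same call from the command line. A short Go sketch against k8s.io/cri-api, where the socket path and output format are assumptions based on CRI-O's conventional defaults:

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Dial the CRI-O socket; requires access to /var/run/crio/crio.sock on the node.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    client := runtimeapi.NewRuntimeServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // An empty filter corresponds to "No filters were applied, returning full container list".
    resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    if err != nil {
        panic(err)
    }
    for _, c := range resp.Containers {
        fmt.Printf("%s\t%s\tattempt=%d\n", c.Id, c.Metadata.Name, c.Metadata.Attempt)
    }
}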
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.054642936Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42c6476a-ef4b-44a3-a7ff-e88b376c3e9a name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.054822363Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42c6476a-ef4b-44a3-a7ff-e88b376c3e9a name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.055972260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3cebbea-3b36-4784-a6cb-fd4a85e06c61 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.056453160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894332056429583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3cebbea-3b36-4784-a6cb-fd4a85e06c61 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.056957628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95fe17b7-df5f-4f8b-b0c5-cfe38043e794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.057040171Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95fe17b7-df5f-4f8b-b0c5-cfe38043e794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.057291459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95fe17b7-df5f-4f8b-b0c5-cfe38043e794 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.107459663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e197cdf9-45da-4830-8fc2-b997b5213943 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.107803356Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e197cdf9-45da-4830-8fc2-b997b5213943 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.109198635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d10226e5-7117-45bb-b706-a8946643f293 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.110286009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894332110253900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d10226e5-7117-45bb-b706-a8946643f293 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.111253858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0140444b-e36d-4762-99d7-6e14ac9c0e50 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.111350123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0140444b-e36d-4762-99d7-6e14ac9c0e50 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.111965274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0140444b-e36d-4762-99d7-6e14ac9c0e50 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.155799082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a40fb3fe-c100-4aca-892a-43dc21909d60 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.155887761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a40fb3fe-c100-4aca-892a-43dc21909d60 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.156977002Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21c2cd15-4b5d-4e4b-a2d5-86a731ed7e70 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.157334251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894332157311856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21c2cd15-4b5d-4e4b-a2d5-86a731ed7e70 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.157935825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd8df13e-c483-49be-928c-89693d1748c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.158003192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd8df13e-c483-49be-928c-89693d1748c7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:52 pause-046984 crio[3072]: time="2025-10-19 17:18:52.158267724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd8df13e-c483-49be-928c-89693d1748c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ff02ae97123c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   b96e941b99989       coredns-66bc5c9577-z9rqv
	94f2efe6ef28c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago      Running             kube-proxy                2                   e8ffffbd2e7e4       kube-proxy-mnsqf
	8591c4aa8ae3f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   18 seconds ago      Running             etcd                      2                   3356e8fb22fe4       etcd-pause-046984
	f9bba59a447fa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   18 seconds ago      Running             kube-scheduler            2                   b0140c18f7525       kube-scheduler-pause-046984
	3d708e0655eeb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   18 seconds ago      Running             kube-controller-manager   2                   46e746be980ce       kube-controller-manager-pause-046984
	662c4140aef8f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   18 seconds ago      Running             kube-apiserver            2                   d1cb507aaf891       kube-apiserver-pause-046984
	6ca460e4a8934       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   33 seconds ago      Exited              coredns                   1                   928128e8edf79       coredns-66bc5c9577-z9rqv
	b4d75c20f540e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   34 seconds ago      Exited              kube-proxy                1                   e43710c0abb04       kube-proxy-mnsqf
	d94639566f241       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Exited              kube-scheduler            1                   a8b14d2d9fc70       kube-scheduler-pause-046984
	ee3dc4c66b7c4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Exited              kube-controller-manager   1                   dfe5467e0dd20       kube-controller-manager-pause-046984
	180dd09c14df7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Exited              etcd                      1                   be6128732155f       etcd-pause-046984
	4944d9dc741ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Exited              kube-apiserver            1                   3c2ec2999d1ad       kube-apiserver-pause-046984
	
	
	==> coredns [0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39663 - 6314 "HINFO IN 27294165726923413.8170061678603372968. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.022674315s
	
	
	==> coredns [6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57460 - 64836 "HINFO IN 7306454367616908346.6183267044183509629. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031499644s
	
	
	==> describe nodes <==
	Name:               pause-046984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-046984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-046984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_17_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-046984
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    pause-046984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 07983801304e404a9288c3d4b9f00792
	  System UUID:                07983801-304e-404a-9288-c3d4b9f00792
	  Boot ID:                    bae3f1af-9538-430a-8f8b-084f8ef83f04
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z9rqv                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     100s
	  kube-system                 etcd-pause-046984                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         106s
	  kube-system                 kube-apiserver-pause-046984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-pause-046984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-mnsqf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-pause-046984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 97s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientPID     106s               kubelet          Node pause-046984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s               kubelet          Node pause-046984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s               kubelet          Node pause-046984 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 106s               kubelet          Starting kubelet.
	  Normal  NodeReady                105s               kubelet          Node pause-046984 status is now: NodeReady
	  Normal  RegisteredNode           101s               node-controller  Node pause-046984 event: Registered Node pause-046984 in Controller
	  Normal  Starting                 20s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-046984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-046984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-046984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-046984 event: Registered Node pause-046984 in Controller
	
	
	==> dmesg <==
	[Oct19 17:16] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003656] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.080988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.099696] kauditd_printk_skb: 102 callbacks suppressed
	[Oct19 17:17] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.148744] kauditd_printk_skb: 18 callbacks suppressed
	[ +42.333161] kauditd_printk_skb: 184 callbacks suppressed
	[Oct19 17:18] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.424770] kauditd_printk_skb: 185 callbacks suppressed
	[  +1.841357] kauditd_printk_skb: 98 callbacks suppressed
	
	
	==> etcd [180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9] <==
	{"level":"info","ts":"2025-10-19T17:18:18.905564Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.42:2379"}
	{"level":"warn","ts":"2025-10-19T17:18:18.960032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:18.984634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:18:18.991462Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T17:18:18.991532Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-046984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"]}
	{"level":"warn","ts":"2025-10-19T17:18:18.991673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39664: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:18.991737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39688","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39688: use of closed network connection"}
	2025/10/19 17:18:18 WARNING: [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "error reading server preface: read tcp 127.0.0.1:39664->127.0.0.1:2379: read: connection reset by peer"
	{"level":"warn","ts":"2025-10-19T17:18:18.997704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39720: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:18.999852Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:18:19.002002Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:18:19.002094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.002115Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"be5e8f7004ae306c","current-leader-member-id":"be5e8f7004ae306c"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002149Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002204Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:19.002212Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.002213Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T17:18:19.002233Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002264Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.42:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002273Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.42:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:19.002279Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.42:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.005375Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"error","ts":"2025-10-19T17:18:19.005440Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.42:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.005477Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2025-10-19T17:18:19.005493Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-046984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"]}
	
	
	==> etcd [8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd] <==
	{"level":"warn","ts":"2025-10-19T17:18:35.116803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.159340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.173608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.197131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.221659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.246575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.255561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.264533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.274704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.292907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.317213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.329085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.334636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.344467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.365156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.377406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.386173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.398007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.422402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.429647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.445368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.462202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.475835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.490964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.586083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:18:52 up 2 min,  0 users,  load average: 1.14, 0.40, 0.15
	Linux pause-046984 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b] <==
	I1019 17:18:17.674383       1 options.go:263] external host was not specified, using 192.168.39.42
	I1019 17:18:17.685587       1 server.go:150] Version: v1.34.1
	I1019 17:18:17.686506       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1019 17:18:18.838318       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1019 17:18:18.839493       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1019 17:18:18.839550       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1019 17:18:18.839568       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1019 17:18:18.839583       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1019 17:18:18.839597       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1019 17:18:18.839610       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1019 17:18:18.839624       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1019 17:18:18.839638       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1019 17:18:18.839652       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1019 17:18:18.839665       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1019 17:18:18.839679       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1019 17:18:18.948563       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 17:18:18.959533       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1019 17:18:18.960571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	
	
	==> kube-apiserver [662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d] <==
	I1019 17:18:36.452166       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:18:36.452764       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:18:36.452818       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:18:36.452844       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:18:36.465003       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:18:36.465065       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:18:36.465099       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:18:36.465116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:18:36.465317       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:18:36.465325       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:18:36.465399       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:18:36.481108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:18:36.487031       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:18:36.487431       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:18:36.498246       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:18:36.505929       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:18:36.797984       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:18:37.286455       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:18:37.987790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:18:38.033019       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:18:38.068132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:18:38.077320       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:18:40.093471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:18:40.143380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:18:40.189879       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99] <==
	I1019 17:18:39.809443       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:18:39.810641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:18:39.811816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:18:39.811887       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:18:39.815109       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:18:39.817376       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:18:39.817409       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:18:39.821697       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:18:39.824227       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:18:39.825573       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:18:39.825676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:18:39.833144       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:18:39.835451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:18:39.835703       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:18:39.835478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:18:39.835773       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:18:39.835904       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:18:39.835984       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:18:39.836068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-046984"
	I1019 17:18:39.836113       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:18:39.836503       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:18:39.842584       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:18:39.846930       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:18:39.849160       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:18:39.858446       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a] <==
	
	
	==> kube-proxy [94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7] <==
	I1019 17:18:37.406801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:18:37.508251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:18:37.508397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.42"]
	E1019 17:18:37.508629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:18:37.583501       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1019 17:18:37.583560       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 17:18:37.583581       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:18:37.608482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:18:37.608706       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:18:37.609915       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:18:37.622227       1 config.go:200] "Starting service config controller"
	I1019 17:18:37.622264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:18:37.622285       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:18:37.622290       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:18:37.622307       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:18:37.622313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:18:37.623369       1 config.go:309] "Starting node config controller"
	I1019 17:18:37.623743       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:18:37.722802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:18:37.722934       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:18:37.723248       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:18:37.724676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533] <==
	
	
	==> kube-scheduler [d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6] <==
	
	
	==> kube-scheduler [f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080] <==
	I1019 17:18:34.997116       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:18:36.824642       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:18:36.824673       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:18:36.831405       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:18:36.831428       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:18:36.831559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:18:36.831673       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:18:36.831706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.831782       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.833042       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:18:36.833140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:18:36.931700       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:18:36.931896       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.931993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.526964    3534 kubelet_node_status.go:124] "Node was previously registered" node="pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527062    3534 kubelet_node_status.go:78] "Successfully registered node" node="pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527084    3534 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527912    3534 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.547059    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-046984\" already exists" pod="kube-system/kube-controller-manager-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.547110    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.557946    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-046984\" already exists" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.557987    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.574803    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-046984\" already exists" pod="kube-system/etcd-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.574832    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.585895    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-046984\" already exists" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.681281    3534 apiserver.go:52] "Watching apiserver"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.704931    3534 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.793588    3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcef04ef-3072-4b46-becb-1e7804e25d88-xtables-lock\") pod \"kube-proxy-mnsqf\" (UID: \"bcef04ef-3072-4b46-becb-1e7804e25d88\") " pod="kube-system/kube-proxy-mnsqf"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.793700    3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcef04ef-3072-4b46-becb-1e7804e25d88-lib-modules\") pod \"kube-proxy-mnsqf\" (UID: \"bcef04ef-3072-4b46-becb-1e7804e25d88\") " pod="kube-system/kube-proxy-mnsqf"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: I1019 17:18:37.188430    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: I1019 17:18:37.190400    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: E1019 17:18:37.213926    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-046984\" already exists" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: E1019 17:18:37.221123    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-046984\" already exists" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:39 pause-046984 kubelet[3534]: I1019 17:18:39.224767    3534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:18:42 pause-046984 kubelet[3534]: E1019 17:18:42.853552    3534 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760894322852373187  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:42 pause-046984 kubelet[3534]: E1019 17:18:42.853584    3534 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760894322852373187  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:46 pause-046984 kubelet[3534]: I1019 17:18:46.806107    3534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:18:52 pause-046984 kubelet[3534]: E1019 17:18:52.855000    3534 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760894332854520008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:52 pause-046984 kubelet[3534]: E1019 17:18:52.855053    3534 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760894332854520008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-046984 -n pause-046984
helpers_test.go:269: (dbg) Run:  kubectl --context pause-046984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-046984 -n pause-046984
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-046984 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-046984 logs -n 25: (1.643986277s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ test-preload-360119 image list                                                                                                                                     │ test-preload-360119       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ delete  │ -p test-preload-360119                                                                                                                                             │ test-preload-360119       │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:14 UTC │
	│ start   │ -p scheduled-stop-593188 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:14 UTC │ 19 Oct 25 17:15 UTC │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 5m                                                                                                                             │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --cancel-scheduled                                                                                                                        │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:15 UTC │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │                     │
	│ stop    │ -p scheduled-stop-593188 --schedule 15s                                                                                                                            │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:15 UTC │ 19 Oct 25 17:16 UTC │
	│ delete  │ -p scheduled-stop-593188                                                                                                                                           │ scheduled-stop-593188     │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:16 UTC │
	│ start   │ -p pause-046984 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                │ pause-046984              │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p force-systemd-env-064535 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                               │ force-systemd-env-064535  │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p offline-crio-033291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                        │ offline-crio-033291       │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p cert-expiration-067580 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                   │ cert-expiration-067580    │ jenkins │ v1.37.0 │ 19 Oct 25 17:16 UTC │ 19 Oct 25 17:18 UTC │
	│ delete  │ -p force-systemd-env-064535                                                                                                                                        │ force-systemd-env-064535  │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:17 UTC │
	│ start   │ -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p pause-046984 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                         │ pause-046984              │ jenkins │ v1.37.0 │ 19 Oct 25 17:17 UTC │ 19 Oct 25 17:18 UTC │
	│ delete  │ -p offline-crio-033291                                                                                                                                             │ offline-crio-033291       │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p stopped-upgrade-254072 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false                                                     │ stopped-upgrade-254072    │ jenkins │ v1.32.0 │ 19 Oct 25 17:18 UTC │                     │
	│ stop    │ -p kubernetes-upgrade-755918                                                                                                                                       │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │ 19 Oct 25 17:18 UTC │
	│ start   │ -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ kubernetes-upgrade-755918 │ jenkins │ v1.37.0 │ 19 Oct 25 17:18 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 17:18:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 17:18:34.554242  312205 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:18:34.554340  312205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:18:34.554345  312205 out.go:374] Setting ErrFile to fd 2...
	I1019 17:18:34.554348  312205 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:18:34.554687  312205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:18:34.555209  312205 out.go:368] Setting JSON to false
	I1019 17:18:34.556150  312205 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10857,"bootTime":1760883458,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:18:34.556346  312205 start.go:143] virtualization: kvm guest
	I1019 17:18:34.559166  312205 out.go:179] * [kubernetes-upgrade-755918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:18:34.560411  312205 notify.go:221] Checking for updates...
	I1019 17:18:34.560438  312205 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:18:34.561638  312205 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:18:34.562761  312205 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:18:34.563846  312205 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 17:18:34.564805  312205 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:18:34.565751  312205 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:18:34.567043  312205 config.go:182] Loaded profile config "kubernetes-upgrade-755918": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1019 17:18:34.567649  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.567703  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.582491  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41591
	I1019 17:18:34.583209  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.583774  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.583799  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.584237  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.584470  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.584749  312205 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:18:34.585202  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.585252  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.598512  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44861
	I1019 17:18:34.598967  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.599452  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.599494  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.599864  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.600100  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.639130  312205 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 17:18:34.640166  312205 start.go:309] selected driver: kvm2
	I1019 17:18:34.640189  312205 start.go:930] validating driver "kvm2" against &{Name:kubernetes-upgrade-755918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-755918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:18:34.640326  312205 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:18:34.641412  312205 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:18:34.641517  312205 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:18:34.657546  312205 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:18:34.657593  312205 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 17:18:34.671644  312205 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 17:18:34.672294  312205 cni.go:84] Creating CNI manager for ""
	I1019 17:18:34.672384  312205 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:18:34.672446  312205 start.go:353] cluster config:
	{Name:kubernetes-upgrade-755918 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-755918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.129 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 17:18:34.672666  312205 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 17:18:34.674481  312205 out.go:179] * Starting "kubernetes-upgrade-755918" primary control-plane node in "kubernetes-upgrade-755918" cluster
	I1019 17:18:34.675573  312205 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 17:18:34.675633  312205 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:34.675649  312205 cache.go:59] Caching tarball of preloaded images
	I1019 17:18:34.675793  312205 preload.go:233] Found /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1019 17:18:34.675821  312205 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1019 17:18:34.675925  312205 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/kubernetes-upgrade-755918/config.json ...
	I1019 17:18:34.676160  312205 start.go:360] acquireMachinesLock for kubernetes-upgrade-755918: {Name:mk3b19946e20646ec6cf08c56ebb92a1f48fa1bb Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1019 17:18:34.676218  312205 start.go:364] duration metric: took 32.439µs to acquireMachinesLock for "kubernetes-upgrade-755918"
	I1019 17:18:34.676240  312205 start.go:96] Skipping create...Using existing machine configuration
	I1019 17:18:34.676249  312205 fix.go:54] fixHost starting: 
	I1019 17:18:34.676548  312205 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:18:34.676587  312205 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:18:34.690872  312205 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1019 17:18:34.691446  312205 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:18:34.691915  312205 main.go:143] libmachine: Using API Version  1
	I1019 17:18:34.691936  312205 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:18:34.692365  312205 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:18:34.692575  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	I1019 17:18:34.692775  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .GetState
	I1019 17:18:34.694821  312205 fix.go:112] recreateIfNeeded on kubernetes-upgrade-755918: state=Stopped err=<nil>
	I1019 17:18:34.694871  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .DriverName
	W1019 17:18:34.695059  312205 fix.go:138] unexpected machine state, will restart: <nil>
	I1019 17:18:32.264341  311731 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.179582689s)
	I1019 17:18:32.264409  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.578794  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.665483  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:32.773184  311731 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:18:32.773280  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.273421  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.774107  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:33.808910  311731 api_server.go:72] duration metric: took 1.035739574s to wait for apiserver process to appear ...
	I1019 17:18:33.808954  311731 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:18:33.809009  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:33.809669  311731 api_server.go:269] stopped: https://192.168.39.42:8443/healthz: Get "https://192.168.39.42:8443/healthz": dial tcp 192.168.39.42:8443: connect: connection refused
	I1019 17:18:34.309080  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.327971  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:18:36.328025  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:18:36.328045  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.364369  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1019 17:18:36.364402  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1019 17:18:36.809644  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:36.818067  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:18:36.818102  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:18:37.309581  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:37.316933  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1019 17:18:37.316968  311731 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1019 17:18:37.809679  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:37.815234  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1019 17:18:37.822834  311731 api_server.go:141] control plane version: v1.34.1
	I1019 17:18:37.822867  311731 api_server.go:131] duration metric: took 4.013901901s to wait for apiserver health ...
	I1019 17:18:37.822880  311731 cni.go:84] Creating CNI manager for ""
	I1019 17:18:37.822889  311731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 17:18:37.824529  311731 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1019 17:18:37.825744  311731 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1019 17:18:37.844782  311731 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1019 17:18:37.869522  311731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:18:37.874784  311731 system_pods.go:59] 6 kube-system pods found
	I1019 17:18:37.874829  311731 system_pods.go:61] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:37.874837  311731 system_pods.go:61] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:37.874848  311731 system_pods.go:61] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:37.874858  311731 system_pods.go:61] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:37.874871  311731 system_pods.go:61] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1019 17:18:37.874881  311731 system_pods.go:61] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:37.874893  311731 system_pods.go:74] duration metric: took 5.349375ms to wait for pod list to return data ...
	I1019 17:18:37.874904  311731 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:18:37.878487  311731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:18:37.878519  311731 node_conditions.go:123] node cpu capacity is 2
	I1019 17:18:37.878536  311731 node_conditions.go:105] duration metric: took 3.625884ms to run NodePressure ...
	I1019 17:18:37.878598  311731 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1019 17:18:38.144840  311731 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1019 17:18:38.149651  311731 kubeadm.go:744] kubelet initialised
	I1019 17:18:38.149673  311731 kubeadm.go:745] duration metric: took 4.805641ms waiting for restarted kubelet to initialise ...
	I1019 17:18:38.149689  311731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1019 17:18:38.166874  311731 ops.go:34] apiserver oom_adj: -16
	I1019 17:18:38.166894  311731 kubeadm.go:602] duration metric: took 7.479990694s to restartPrimaryControlPlane
	I1019 17:18:38.166903  311731 kubeadm.go:403] duration metric: took 7.604585127s to StartCluster
	I1019 17:18:38.166925  311731 settings.go:142] acquiring lock: {Name:mkf8e8333d0302d1bf1fad4a2ff30b0524cb52b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:18:38.167019  311731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:18:38.168217  311731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/kubeconfig: {Name:mk22311d445eddc7a50c63a1389fab4cf9c803b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 17:18:38.168482  311731 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1019 17:18:38.168544  311731 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1019 17:18:38.168809  311731 config.go:182] Loaded profile config "pause-046984": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:18:38.170291  311731 out.go:179] * Verifying Kubernetes components...
	I1019 17:18:38.170987  311731 out.go:179] * Enabled addons: 
	I1019 17:18:34.696671  312205 out.go:252] * Restarting existing kvm2 VM for "kubernetes-upgrade-755918" ...
	I1019 17:18:34.696706  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Calling .Start
	I1019 17:18:34.696892  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) starting domain...
	I1019 17:18:34.696919  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) ensuring networks are active...
	I1019 17:18:34.697741  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Ensuring network default is active
	I1019 17:18:34.698254  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) Ensuring network mk-kubernetes-upgrade-755918 is active
	I1019 17:18:34.698734  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) getting domain XML...
	I1019 17:18:34.699898  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | starting domain XML:
	I1019 17:18:34.699943  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | <domain type='kvm'>
	I1019 17:18:34.699955  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <name>kubernetes-upgrade-755918</name>
	I1019 17:18:34.699963  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <uuid>75d5236c-07d1-42f1-90c0-4c47e14e6c1c</uuid>
	I1019 17:18:34.699971  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <memory unit='KiB'>3145728</memory>
	I1019 17:18:34.699996  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <currentMemory unit='KiB'>3145728</currentMemory>
	I1019 17:18:34.700007  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <vcpu placement='static'>2</vcpu>
	I1019 17:18:34.700015  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <os>
	I1019 17:18:34.700025  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	I1019 17:18:34.700045  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <boot dev='cdrom'/>
	I1019 17:18:34.700055  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <boot dev='hd'/>
	I1019 17:18:34.700063  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <bootmenu enable='no'/>
	I1019 17:18:34.700071  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </os>
	I1019 17:18:34.700078  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <features>
	I1019 17:18:34.700086  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <acpi/>
	I1019 17:18:34.700093  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <apic/>
	I1019 17:18:34.700101  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <pae/>
	I1019 17:18:34.700108  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </features>
	I1019 17:18:34.700119  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <cpu mode='host-passthrough' check='none' migratable='on'/>
	I1019 17:18:34.700127  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <clock offset='utc'/>
	I1019 17:18:34.700135  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_poweroff>destroy</on_poweroff>
	I1019 17:18:34.700142  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_reboot>restart</on_reboot>
	I1019 17:18:34.700150  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <on_crash>destroy</on_crash>
	I1019 17:18:34.700156  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   <devices>
	I1019 17:18:34.700165  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <emulator>/usr/bin/qemu-system-x86_64</emulator>
	I1019 17:18:34.700172  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <disk type='file' device='cdrom'>
	I1019 17:18:34.700190  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <driver name='qemu' type='raw'/>
	I1019 17:18:34.700206  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/boot2docker.iso'/>
	I1019 17:18:34.700216  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target dev='hdc' bus='scsi'/>
	I1019 17:18:34.700223  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <readonly/>
	I1019 17:18:34.700233  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	I1019 17:18:34.700240  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </disk>
	I1019 17:18:34.700248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <disk type='file' device='disk'>
	I1019 17:18:34.700257  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <driver name='qemu' type='raw' io='threads'/>
	I1019 17:18:34.700280  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source file='/home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/kubernetes-upgrade-755918.rawdisk'/>
	I1019 17:18:34.700287  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target dev='hda' bus='virtio'/>
	I1019 17:18:34.700298  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	I1019 17:18:34.700305  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </disk>
	I1019 17:18:34.700314  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='usb' index='0' model='piix3-uhci'>
	I1019 17:18:34.700324  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	I1019 17:18:34.700332  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </controller>
	I1019 17:18:34.700339  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='pci' index='0' model='pci-root'/>
	I1019 17:18:34.700355  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <controller type='scsi' index='0' model='lsilogic'>
	I1019 17:18:34.700364  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	I1019 17:18:34.700372  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </controller>
	I1019 17:18:34.700379  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <interface type='network'>
	I1019 17:18:34.700388  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <mac address='52:54:00:93:e7:4d'/>
	I1019 17:18:34.700395  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source network='mk-kubernetes-upgrade-755918'/>
	I1019 17:18:34.700403  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <model type='virtio'/>
	I1019 17:18:34.700412  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	I1019 17:18:34.700419  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </interface>
	I1019 17:18:34.700426  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <interface type='network'>
	I1019 17:18:34.700437  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <mac address='52:54:00:c3:f9:8d'/>
	I1019 17:18:34.700444  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <source network='default'/>
	I1019 17:18:34.700452  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <model type='virtio'/>
	I1019 17:18:34.700461  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	I1019 17:18:34.700469  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </interface>
	I1019 17:18:34.700482  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <serial type='pty'>
	I1019 17:18:34.700493  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target type='isa-serial' port='0'>
	I1019 17:18:34.700499  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |         <model name='isa-serial'/>
	I1019 17:18:34.700507  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       </target>
	I1019 17:18:34.700512  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </serial>
	I1019 17:18:34.700519  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <console type='pty'>
	I1019 17:18:34.700525  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <target type='serial' port='0'/>
	I1019 17:18:34.700534  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </console>
	I1019 17:18:34.700540  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <input type='mouse' bus='ps2'/>
	I1019 17:18:34.700547  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <input type='keyboard' bus='ps2'/>
	I1019 17:18:34.700553  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <audio id='1' type='none'/>
	I1019 17:18:34.700560  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <memballoon model='virtio'>
	I1019 17:18:34.700568  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	I1019 17:18:34.700576  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </memballoon>
	I1019 17:18:34.700582  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     <rng model='virtio'>
	I1019 17:18:34.700591  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <backend model='random'>/dev/random</backend>
	I1019 17:18:34.700599  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |       <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	I1019 17:18:34.700608  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |     </rng>
	I1019 17:18:34.700625  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG |   </devices>
	I1019 17:18:34.700633  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | </domain>
	I1019 17:18:34.700641  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | 
	I1019 17:18:36.207669  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for domain to start...
	I1019 17:18:36.209248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) domain is now running
	I1019 17:18:36.209278  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for IP...
	I1019 17:18:36.210248  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.210890  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) found domain IP: 192.168.50.129
	I1019 17:18:36.210917  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has current primary IP address 192.168.50.129 and MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.210925  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) reserving static IP address...
	I1019 17:18:36.211448  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-755918", mac: "52:54:00:93:e7:4d", ip: "192.168.50.129"} in network mk-kubernetes-upgrade-755918: {Iface:virbr2 ExpiryTime:2025-10-19 18:18:02 +0000 UTC Type:0 Mac:52:54:00:93:e7:4d Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:kubernetes-upgrade-755918 Clientid:01:52:54:00:93:e7:4d}
	I1019 17:18:36.211481  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | skip adding static IP to network mk-kubernetes-upgrade-755918 - found existing host DHCP lease matching {name: "kubernetes-upgrade-755918", mac: "52:54:00:93:e7:4d", ip: "192.168.50.129"}
	I1019 17:18:36.211502  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) reserved static IP address 192.168.50.129 for domain kubernetes-upgrade-755918
	I1019 17:18:36.211519  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) waiting for SSH...
	I1019 17:18:36.211528  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Getting to WaitForSSH function...
	I1019 17:18:36.214189  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.214544  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:e7:4d", ip: ""} in network mk-kubernetes-upgrade-755918: {Iface:virbr2 ExpiryTime:2025-10-19 18:18:02 +0000 UTC Type:0 Mac:52:54:00:93:e7:4d Iaid: IPaddr:192.168.50.129 Prefix:24 Hostname:kubernetes-upgrade-755918 Clientid:01:52:54:00:93:e7:4d}
	I1019 17:18:36.214575  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | domain kubernetes-upgrade-755918 has defined IP address 192.168.50.129 and MAC address 52:54:00:93:e7:4d in network mk-kubernetes-upgrade-755918
	I1019 17:18:36.214780  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Using SSH client type: external
	I1019 17:18:36.214810  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Using SSH private key: /home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/id_rsa (-rw-------)
	I1019 17:18:36.214849  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/21683-274250/.minikube/machines/kubernetes-upgrade-755918/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1019 17:18:36.214884  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | About to run SSH command:
	I1019 17:18:36.214924  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | exit 0
	I1019 17:18:40.163219  312118 out.go:177] * Starting control plane node stopped-upgrade-254072 in cluster stopped-upgrade-254072
	I1019 17:18:40.164337  312118 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1019 17:18:40.269180  312118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:40.269202  312118 cache.go:56] Caching tarball of preloaded images
	I1019 17:18:40.269358  312118 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1019 17:18:40.270897  312118 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1019 17:18:40.271889  312118 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1019 17:18:40.384487  312118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1019 17:18:38.171635  311731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1019 17:18:38.172188  311731 addons.go:515] duration metric: took 3.654089ms for enable addons: enabled=[]
	I1019 17:18:38.406341  311731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1019 17:18:38.449239  311731 node_ready.go:35] waiting up to 6m0s for node "pause-046984" to be "Ready" ...
	I1019 17:18:38.455635  311731 node_ready.go:49] node "pause-046984" is "Ready"
	I1019 17:18:38.455695  311731 node_ready.go:38] duration metric: took 6.402162ms for node "pause-046984" to be "Ready" ...
	I1019 17:18:38.455719  311731 api_server.go:52] waiting for apiserver process to appear ...
	I1019 17:18:38.455789  311731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:18:38.486052  311731 api_server.go:72] duration metric: took 317.530927ms to wait for apiserver process to appear ...
	I1019 17:18:38.486082  311731 api_server.go:88] waiting for apiserver healthz status ...
	I1019 17:18:38.486103  311731 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1019 17:18:38.492335  311731 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1019 17:18:38.494599  311731 api_server.go:141] control plane version: v1.34.1
	I1019 17:18:38.494625  311731 api_server.go:131] duration metric: took 8.534531ms to wait for apiserver health ...
	I1019 17:18:38.494635  311731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1019 17:18:38.497591  311731 system_pods.go:59] 6 kube-system pods found
	I1019 17:18:38.497635  311731 system_pods.go:61] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:38.497647  311731 system_pods.go:61] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:38.497657  311731 system_pods.go:61] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:38.497671  311731 system_pods.go:61] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:38.497680  311731 system_pods.go:61] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running
	I1019 17:18:38.497691  311731 system_pods.go:61] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:38.497700  311731 system_pods.go:74] duration metric: took 3.057838ms to wait for pod list to return data ...
	I1019 17:18:38.497713  311731 default_sa.go:34] waiting for default service account to be created ...
	I1019 17:18:38.500082  311731 default_sa.go:45] found service account: "default"
	I1019 17:18:38.500104  311731 default_sa.go:55] duration metric: took 2.382893ms for default service account to be created ...
	I1019 17:18:38.500115  311731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1019 17:18:38.503490  311731 system_pods.go:86] 6 kube-system pods found
	I1019 17:18:38.503534  311731 system_pods.go:89] "coredns-66bc5c9577-z9rqv" [7655a35b-ffaf-424b-8a40-627a6a3e5b1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1019 17:18:38.503548  311731 system_pods.go:89] "etcd-pause-046984" [b9d1bfc4-d889-4919-8387-11ce6083bf8f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1019 17:18:38.503565  311731 system_pods.go:89] "kube-apiserver-pause-046984" [d3ffb7b1-34e4-4e0f-88ea-20958de7b2fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1019 17:18:38.503584  311731 system_pods.go:89] "kube-controller-manager-pause-046984" [461316d6-bb1e-4450-b216-959f836a75fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1019 17:18:38.503594  311731 system_pods.go:89] "kube-proxy-mnsqf" [bcef04ef-3072-4b46-becb-1e7804e25d88] Running
	I1019 17:18:38.503610  311731 system_pods.go:89] "kube-scheduler-pause-046984" [5ac163ed-c77e-4b33-8743-2e16841ec8ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1019 17:18:38.503623  311731 system_pods.go:126] duration metric: took 3.500524ms to wait for k8s-apps to be running ...
	I1019 17:18:38.503639  311731 system_svc.go:44] waiting for kubelet service to be running ....
	I1019 17:18:38.503696  311731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:18:38.525624  311731 system_svc.go:56] duration metric: took 21.969985ms WaitForService to wait for kubelet
	I1019 17:18:38.525665  311731 kubeadm.go:587] duration metric: took 357.148406ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1019 17:18:38.525694  311731 node_conditions.go:102] verifying NodePressure condition ...
	I1019 17:18:38.531445  311731 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1019 17:18:38.531482  311731 node_conditions.go:123] node cpu capacity is 2
	I1019 17:18:38.531503  311731 node_conditions.go:105] duration metric: took 5.800475ms to run NodePressure ...
	I1019 17:18:38.531525  311731 start.go:242] waiting for startup goroutines ...
	I1019 17:18:38.531547  311731 start.go:247] waiting for cluster config update ...
	I1019 17:18:38.531565  311731 start.go:256] writing updated cluster config ...
	I1019 17:18:38.532105  311731 ssh_runner.go:195] Run: rm -f paused
	I1019 17:18:38.538231  311731 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:18:38.539028  311731 kapi.go:59] client config for pause-046984: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.crt", KeyFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/profiles/pause-046984/client.key", CAFile:"/home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819d20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1019 17:18:38.542967  311731 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	W1019 17:18:40.548664  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	W1019 17:18:42.550405  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	W1019 17:18:44.551384  311731 pod_ready.go:104] pod "coredns-66bc5c9577-z9rqv" is not "Ready", error: <nil>
	I1019 17:18:47.050067  311731 pod_ready.go:94] pod "coredns-66bc5c9577-z9rqv" is "Ready"
	I1019 17:18:47.050097  311731 pod_ready.go:86] duration metric: took 8.507099917s for pod "coredns-66bc5c9577-z9rqv" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.053018  311731 pod_ready.go:83] waiting for pod "etcd-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.057837  311731 pod_ready.go:94] pod "etcd-pause-046984" is "Ready"
	I1019 17:18:47.057869  311731 pod_ready.go:86] duration metric: took 4.823069ms for pod "etcd-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.059870  311731 pod_ready.go:83] waiting for pod "kube-apiserver-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.064362  311731 pod_ready.go:94] pod "kube-apiserver-pause-046984" is "Ready"
	I1019 17:18:47.064384  311731 pod_ready.go:86] duration metric: took 4.490903ms for pod "kube-apiserver-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.066629  311731 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:47.483538  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | SSH cmd err, output: exit status 255: 
	I1019 17:18:47.483571  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1019 17:18:47.483620  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | command : exit 0
	I1019 17:18:47.483649  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | err     : exit status 255
	I1019 17:18:47.483664  312205 main.go:143] libmachine: (kubernetes-upgrade-755918) DBG | output  : 
	I1019 17:18:48.572484  311731 pod_ready.go:94] pod "kube-controller-manager-pause-046984" is "Ready"
	I1019 17:18:48.572524  311731 pod_ready.go:86] duration metric: took 1.505869409s for pod "kube-controller-manager-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:48.646772  311731 pod_ready.go:83] waiting for pod "kube-proxy-mnsqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:49.047261  311731 pod_ready.go:94] pod "kube-proxy-mnsqf" is "Ready"
	I1019 17:18:49.047286  311731 pod_ready.go:86] duration metric: took 400.478809ms for pod "kube-proxy-mnsqf" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:49.246736  311731 pod_ready.go:83] waiting for pod "kube-scheduler-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:51.252658  311731 pod_ready.go:94] pod "kube-scheduler-pause-046984" is "Ready"
	I1019 17:18:51.252685  311731 pod_ready.go:86] duration metric: took 2.005921951s for pod "kube-scheduler-pause-046984" in "kube-system" namespace to be "Ready" or be gone ...
	I1019 17:18:51.252696  311731 pod_ready.go:40] duration metric: took 12.714424764s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1019 17:18:51.302454  311731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1019 17:18:51.304006  311731 out.go:179] * Done! kubectl is now configured to use "pause-046984" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.178382055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f11d26f1-4c21-407b-a793-26a80d808898 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.180178570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a1cb967-6ba0-4b7b-8c27-3d48d148c89f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.180784950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894334180701925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a1cb967-6ba0-4b7b-8c27-3d48d148c89f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.181687829Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ddd74e2-f7e4-45b0-be60-988d6b65994b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.181878318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ddd74e2-f7e4-45b0-be60-988d6b65994b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.182258843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ddd74e2-f7e4-45b0-be60-988d6b65994b name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.238956349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aea3762f-e5e6-4d6c-94e0-31723b441860 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.239068723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aea3762f-e5e6-4d6c-94e0-31723b441860 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.240623469Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df2f4945-1e03-4620-85dd-e7abd83de668 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.241282472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894334241247408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df2f4945-1e03-4620-85dd-e7abd83de668 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.242312673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcc9a01e-2ec4-49d4-9367-1e6c4c09eb2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.242440387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcc9a01e-2ec4-49d4-9367-1e6c4c09eb2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.242876500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcc9a01e-2ec4-49d4-9367-1e6c4c09eb2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.283310087Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=17b04751-b4a2-4ce2-a878-30c96e036811 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.283683997Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-z9rqv,Uid:7655a35b-ffaf-424b-8a40-627a6a3e5b1e,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760894317094199615,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-19T17:18:36.685037058Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&PodSandboxMetadata{Name:kube-proxy-mnsqf,Uid:bcef04ef-3072-4b46-becb-1e7804e25d88,Namespace:kube-system,Attempt
:2,},State:SANDBOX_READY,CreatedAt:1760894317017653731,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-19T17:18:36.685046868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&PodSandboxMetadata{Name:etcd-pause-046984,Uid:a3b2d4b76f920242b3eeb72a31f4a5b7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760894313253491612,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.39.42:2379,kubernetes.io/config.hash: a3b2d4b76f920242b3eeb72a31f4a5b7,kubernetes.io/config.seen: 2025-10-19T17:18:32.695467837Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-046984,Uid:e6ce92c92bde0baf1df2a20ff7c90fc3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760894313247645789,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6ce92c92bde0baf1df2a20ff7c90fc3,kubernetes.io/config.seen: 2025-10-19T17:18:32.695472303Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d1cb507aaf891f7d62474813d158011931
44b757e9b15fa2435f2698350fb764,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-046984,Uid:1cad97fa3410eba4e391c0d82f1c5537,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760894313232524313,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.42:8443,kubernetes.io/config.hash: 1cad97fa3410eba4e391c0d82f1c5537,kubernetes.io/config.seen: 2025-10-19T17:18:32.695471096Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-046984,Uid:15b92579863fcd14c69b4cf471043b4b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1760894313231955167,Label
s:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15b92579863fcd14c69b4cf471043b4b,kubernetes.io/config.seen: 2025-10-19T17:18:32.695473253Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3bd4bc44a6921316608,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-z9rqv,Uid:7655a35b-ffaf-424b-8a40-627a6a3e5b1e,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894297053438342,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/c
onfig.seen: 2025-10-19T17:17:12.163851786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-046984,Uid:e6ce92c92bde0baf1df2a20ff7c90fc3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894296776378033,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6ce92c92bde0baf1df2a20ff7c90fc3,kubernetes.io/config.seen: 2025-10-19T17:17:06.268408747Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&PodSandboxMetadata{Name:kube-proxy-mnsqf,Uid:bcef04ef-3072-4b46-becb-1e7804e25d88,Names
pace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894296775898333,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-10-19T17:17:12.107405484Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&PodSandboxMetadata{Name:etcd-pause-046984,Uid:a3b2d4b76f920242b3eeb72a31f4a5b7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894296767477130,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,tier: control-plane,},Annotations:map[string
]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.42:2379,kubernetes.io/config.hash: a3b2d4b76f920242b3eeb72a31f4a5b7,kubernetes.io/config.seen: 2025-10-19T17:17:06.268411427Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-046984,Uid:15b92579863fcd14c69b4cf471043b4b,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894296742500114,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 15b92579863fcd14c69b4cf471043b4b,kubernetes.io/config.seen: 2025-10-19T17:17:06.268410412Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c2ec2999d1ad3a28704fa5bbbd2
da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-046984,Uid:1cad97fa3410eba4e391c0d82f1c5537,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1760894296725387793,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.42:8443,kubernetes.io/config.hash: 1cad97fa3410eba4e391c0d82f1c5537,kubernetes.io/config.seen: 2025-10-19T17:17:06.268385083Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=17b04751-b4a2-4ce2-a878-30c96e036811 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.284607216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc56fc57-4ec7-476b-ab59-862ff785b6f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.284675299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc56fc57-4ec7-476b-ab59-862ff785b6f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.286802323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc56fc57-4ec7-476b-ab59-862ff785b6f3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.313162079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8613238-4567-449b-920b-dcc5737ec304 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.313463353Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8613238-4567-449b-920b-dcc5737ec304 name=/runtime.v1.RuntimeService/Version
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.315130269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a79c03b3-c5ad-4806-8636-d073714fa325 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.315897239Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1760894334315864899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a79c03b3-c5ad-4806-8636-d073714fa325 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.317009312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecbc21e5-765d-403e-b686-3f1271507ed8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.317115052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecbc21e5-765d-403e-b686-3f1271507ed8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 19 17:18:54 pause-046984 crio[3072]: time="2025-10-19 17:18:54.317507976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58,PodSandboxId:b96e941b999897db2610e09acf5adda1c456cbb72d35d1f8324de75c061f39a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1760894317488330888,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7,PodSandboxId:e8ffffbd2e7e4dfb7739748d1e0d843b7e0ad47a6dbc08ef2b765d03ffaa3c91,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1760894317145523758,Labels:map[string]s
tring{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080,PodSandboxId:b0140c18f7525a5ef3e43b32377ee8328a2ebe041864828350777802d992fc9b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1760894313588306780,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd,PodSandboxId:3356e8fb22fe4eb275eb832f2a5f0c2e91c375d4b96c2232c40dfface11fa0e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:C
ONTAINER_RUNNING,CreatedAt:1760894313593041404,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99,PodSandboxId:46e746be980ce8b6fca698a4468e34a7ef4bf5af83343924502c75db0de51f7f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1760894313545826383,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d,PodSandboxId:d1cb507aaf891f7d62474813d15801193144b757e9b15fa2435f2698350fb764,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c
3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1760894313492105297,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4,PodSandboxId:928128e8edf7991a6e5d8dbf87518385425b952ae27fa3
bd4bc44a6921316608,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1760894298448998948,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z9rqv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7655a35b-ffaf-424b-8a40-627a6a3e5b1e,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533,PodSandboxId:e43710c0abb0489f94c5bfe85ccbfba17a8ccf06991d48dd6b57df1a8b676480,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1760894297521253631,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mnsqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcef04ef-3072-4b46-becb-1e7804e25d88,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6,PodSandboxId:a8b14d2d9fc70ac60f891c9219c3ebbe0770912b66d1fa1a77da8466a508a236,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1760894297418101760,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15b92579863fcd14c69b4cf471043b4b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a,PodSandboxId:dfe5467e0dd20a89b03b764663999847a3d4a010e6bcf4c0eb9d864fa6c077d1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1760894297383867226,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6ce92c92bde0baf1df2a20ff7c90fc3,},
Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9,PodSandboxId:be6128732155f99a7e06eb4edf937366e57bbf6214a5e7e9e790163ca935d2d0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1760894297275087476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-046984,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: a3b2d4b76f920242b3eeb72a31f4a5b7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b,PodSandboxId:3c2ec2999d1ad3a28704fa5bbbd2da11e667e41d3f099a47c8f6be26078c23b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1760894297209510021,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-046984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cad97fa3410eba4e391c0d82f1c5537,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecbc21e5-765d-403e-b686-3f1271507ed8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ff02ae97123c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   b96e941b99989       coredns-66bc5c9577-z9rqv
	94f2efe6ef28c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   17 seconds ago      Running             kube-proxy                2                   e8ffffbd2e7e4       kube-proxy-mnsqf
	8591c4aa8ae3f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   20 seconds ago      Running             etcd                      2                   3356e8fb22fe4       etcd-pause-046984
	f9bba59a447fa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago      Running             kube-scheduler            2                   b0140c18f7525       kube-scheduler-pause-046984
	3d708e0655eeb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago      Running             kube-controller-manager   2                   46e746be980ce       kube-controller-manager-pause-046984
	662c4140aef8f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago      Running             kube-apiserver            2                   d1cb507aaf891       kube-apiserver-pause-046984
	6ca460e4a8934       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   35 seconds ago      Exited              coredns                   1                   928128e8edf79       coredns-66bc5c9577-z9rqv
	b4d75c20f540e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   36 seconds ago      Exited              kube-proxy                1                   e43710c0abb04       kube-proxy-mnsqf
	d94639566f241       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Exited              kube-scheduler            1                   a8b14d2d9fc70       kube-scheduler-pause-046984
	ee3dc4c66b7c4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Exited              kube-controller-manager   1                   dfe5467e0dd20       kube-controller-manager-pause-046984
	180dd09c14df7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Exited              etcd                      1                   be6128732155f       etcd-pause-046984
	4944d9dc741ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Exited              kube-apiserver            1                   3c2ec2999d1ad       kube-apiserver-pause-046984
	
	
	==> coredns [0ff02ae97123ca4f73d1283f137ed862cb834f144462c564fcb415cd278d3b58] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39663 - 6314 "HINFO IN 27294165726923413.8170061678603372968. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.022674315s
	
	
	==> coredns [6ca460e4a893443d5e31ba0c33acef332ba3e98264273cae75f510d535dd8de4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57460 - 64836 "HINFO IN 7306454367616908346.6183267044183509629. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031499644s
	
	
	==> describe nodes <==
	Name:               pause-046984
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-046984
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e20882874ea1ac33516421f13ca0f7def6fb6b34
	                    minikube.k8s.io/name=pause-046984
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_19T17_17_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 19 Oct 2025 17:17:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-046984
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 19 Oct 2025 17:18:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 19 Oct 2025 17:18:36 +0000   Sun, 19 Oct 2025 17:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    pause-046984
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 07983801304e404a9288c3d4b9f00792
	  System UUID:                07983801-304e-404a-9288-c3d4b9f00792
	  Boot ID:                    bae3f1af-9538-430a-8f8b-084f8ef83f04
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z9rqv                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     102s
	  kube-system                 etcd-pause-046984                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         108s
	  kube-system                 kube-apiserver-pause-046984             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-pause-046984    200m (10%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-mnsqf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-pause-046984             100m (5%)     0 (0%)      0 (0%)           0 (0%)         108s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 17s                kube-proxy       
	  Normal  NodeHasSufficientPID     108s               kubelet          Node pause-046984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node pause-046984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node pause-046984 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  NodeReady                107s               kubelet          Node pause-046984 status is now: NodeReady
	  Normal  RegisteredNode           103s               node-controller  Node pause-046984 event: Registered Node pause-046984 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-046984 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-046984 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-046984 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-046984 event: Registered Node pause-046984 in Controller
	
	
	==> dmesg <==
	[Oct19 17:16] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003656] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.166458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.080988] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.099696] kauditd_printk_skb: 102 callbacks suppressed
	[Oct19 17:17] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.148744] kauditd_printk_skb: 18 callbacks suppressed
	[ +42.333161] kauditd_printk_skb: 184 callbacks suppressed
	[Oct19 17:18] kauditd_printk_skb: 275 callbacks suppressed
	[  +1.424770] kauditd_printk_skb: 185 callbacks suppressed
	[  +1.841357] kauditd_printk_skb: 98 callbacks suppressed
	
	
	==> etcd [180dd09c14df72f6831b149aba1874f3461a6782e41b9c9f7b85c35a6f96b5a9] <==
	{"level":"info","ts":"2025-10-19T17:18:18.905564Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.42:2379"}
	{"level":"warn","ts":"2025-10-19T17:18:18.960032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:18.984634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39704","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-19T17:18:18.991462Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-19T17:18:18.991532Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-046984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"]}
	{"level":"warn","ts":"2025-10-19T17:18:18.991673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39664","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39664: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:18.991737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39688","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39688: use of closed network connection"}
	2025/10/19 17:18:18 WARNING: [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "error reading server preface: read tcp 127.0.0.1:39664->127.0.0.1:2379: read: connection reset by peer"
	{"level":"warn","ts":"2025-10-19T17:18:18.997704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:39720: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:18.999852Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:18:19.002002Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-19T17:18:19.002094Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.002115Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"be5e8f7004ae306c","current-leader-member-id":"be5e8f7004ae306c"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002149Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002204Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:19.002212Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.002213Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-19T17:18:19.002233Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002264Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.42:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-19T17:18:19.002273Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.42:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-19T17:18:19.002279Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.42:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.005375Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"error","ts":"2025-10-19T17:18:19.005440Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.42:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-19T17:18:19.005477Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.42:2380"}
	{"level":"info","ts":"2025-10-19T17:18:19.005493Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-046984","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.42:2380"],"advertise-client-urls":["https://192.168.39.42:2379"]}
	
	
	==> etcd [8591c4aa8ae3f813a48e89256c784d44e6f9a5f1f2d52c969ac66cb87bbbbcfd] <==
	{"level":"warn","ts":"2025-10-19T17:18:35.116803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.159340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.173608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.197131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.221659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.246575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.255561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.264533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.274704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.292907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.317213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.329085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.334636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.344467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.365156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.377406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.386173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.398007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.422402Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.429647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.445368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.462202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.475835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.490964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-19T17:18:35.586083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 17:18:54 up 2 min,  0 users,  load average: 1.29, 0.44, 0.16
	Linux pause-046984 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4944d9dc741ea8a621669129b6109f09dd4b8b5e7258461f2d839fd79cc8b72b] <==
	I1019 17:18:17.674383       1 options.go:263] external host was not specified, using 192.168.39.42
	I1019 17:18:17.685587       1 server.go:150] Version: v1.34.1
	I1019 17:18:17.686506       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1019 17:18:18.838318       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1019 17:18:18.839493       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1019 17:18:18.839550       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1019 17:18:18.839568       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1019 17:18:18.839583       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1019 17:18:18.839597       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1019 17:18:18.839610       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1019 17:18:18.839624       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1019 17:18:18.839638       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1019 17:18:18.839652       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1019 17:18:18.839665       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1019 17:18:18.839679       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1019 17:18:18.948563       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1019 17:18:18.959533       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1019 17:18:18.960571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	
	
	==> kube-apiserver [662c4140aef8fdbd0e6b33fa27779f3406d9fdf0ace1ee302499af23c64cd12d] <==
	I1019 17:18:36.452166       1 aggregator.go:171] initial CRD sync complete...
	I1019 17:18:36.452764       1 autoregister_controller.go:144] Starting autoregister controller
	I1019 17:18:36.452818       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1019 17:18:36.452844       1 cache.go:39] Caches are synced for autoregister controller
	I1019 17:18:36.465003       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1019 17:18:36.465065       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1019 17:18:36.465099       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1019 17:18:36.465116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1019 17:18:36.465317       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1019 17:18:36.465325       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1019 17:18:36.465399       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1019 17:18:36.481108       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1019 17:18:36.487031       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1019 17:18:36.487431       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1019 17:18:36.498246       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1019 17:18:36.505929       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1019 17:18:36.797984       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1019 17:18:37.286455       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1019 17:18:37.987790       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1019 17:18:38.033019       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1019 17:18:38.068132       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1019 17:18:38.077320       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1019 17:18:40.093471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1019 17:18:40.143380       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1019 17:18:40.189879       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [3d708e0655eeba34296a163b236c8ff279367b83662a25ffabc351462036ac99] <==
	I1019 17:18:39.809443       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1019 17:18:39.810641       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1019 17:18:39.811816       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1019 17:18:39.811887       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1019 17:18:39.815109       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1019 17:18:39.817376       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1019 17:18:39.817409       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1019 17:18:39.821697       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1019 17:18:39.824227       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1019 17:18:39.825573       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1019 17:18:39.825676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1019 17:18:39.833144       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1019 17:18:39.835451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1019 17:18:39.835703       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1019 17:18:39.835478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1019 17:18:39.835773       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1019 17:18:39.835904       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1019 17:18:39.835984       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1019 17:18:39.836068       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-046984"
	I1019 17:18:39.836113       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1019 17:18:39.836503       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1019 17:18:39.842584       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1019 17:18:39.846930       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1019 17:18:39.849160       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1019 17:18:39.858446       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [ee3dc4c66b7c4d68572ee374d8c9eb458e56d7c7883b1bfa63c9da942534f88a] <==
	
	
	==> kube-proxy [94f2efe6ef28c01e17e71f2cc08b9349261aa5fbce6d2218d30742353b1c38b7] <==
	I1019 17:18:37.406801       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1019 17:18:37.508251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1019 17:18:37.508397       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.42"]
	E1019 17:18:37.508629       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1019 17:18:37.583501       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1019 17:18:37.583560       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1019 17:18:37.583581       1 server_linux.go:132] "Using iptables Proxier"
	I1019 17:18:37.608482       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1019 17:18:37.608706       1 server.go:527] "Version info" version="v1.34.1"
	I1019 17:18:37.609915       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:18:37.622227       1 config.go:200] "Starting service config controller"
	I1019 17:18:37.622264       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1019 17:18:37.622285       1 config.go:106] "Starting endpoint slice config controller"
	I1019 17:18:37.622290       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1019 17:18:37.622307       1 config.go:403] "Starting serviceCIDR config controller"
	I1019 17:18:37.622313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1019 17:18:37.623369       1 config.go:309] "Starting node config controller"
	I1019 17:18:37.623743       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1019 17:18:37.722802       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1019 17:18:37.722934       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1019 17:18:37.723248       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1019 17:18:37.724676       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [b4d75c20f540e595b1556128fd9bdec9bb473b7879f38e05c37ddb5be92d5533] <==
	
	
	==> kube-scheduler [d94639566f241bd36a3beb47c8dc56cbc896b815c911ed276556b64c475ca4f6] <==
	
	
	==> kube-scheduler [f9bba59a447faf09e55815d1c234a15d16ee5af14510e96d8c6e9507cb394080] <==
	I1019 17:18:34.997116       1 serving.go:386] Generated self-signed cert in-memory
	I1019 17:18:36.824642       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1019 17:18:36.824673       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1019 17:18:36.831405       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1019 17:18:36.831428       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1019 17:18:36.831559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:18:36.831673       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1019 17:18:36.831706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.831782       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.833042       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1019 17:18:36.833140       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1019 17:18:36.931700       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1019 17:18:36.931896       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1019 17:18:36.931993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.526964    3534 kubelet_node_status.go:124] "Node was previously registered" node="pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527062    3534 kubelet_node_status.go:78] "Successfully registered node" node="pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527084    3534 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.527912    3534 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.547059    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-046984\" already exists" pod="kube-system/kube-controller-manager-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.547110    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.557946    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-046984\" already exists" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.557987    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.574803    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-046984\" already exists" pod="kube-system/etcd-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.574832    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: E1019 17:18:36.585895    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-046984\" already exists" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.681281    3534 apiserver.go:52] "Watching apiserver"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.704931    3534 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.793588    3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcef04ef-3072-4b46-becb-1e7804e25d88-xtables-lock\") pod \"kube-proxy-mnsqf\" (UID: \"bcef04ef-3072-4b46-becb-1e7804e25d88\") " pod="kube-system/kube-proxy-mnsqf"
	Oct 19 17:18:36 pause-046984 kubelet[3534]: I1019 17:18:36.793700    3534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcef04ef-3072-4b46-becb-1e7804e25d88-lib-modules\") pod \"kube-proxy-mnsqf\" (UID: \"bcef04ef-3072-4b46-becb-1e7804e25d88\") " pod="kube-system/kube-proxy-mnsqf"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: I1019 17:18:37.188430    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: I1019 17:18:37.190400    3534 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: E1019 17:18:37.213926    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-046984\" already exists" pod="kube-system/kube-scheduler-pause-046984"
	Oct 19 17:18:37 pause-046984 kubelet[3534]: E1019 17:18:37.221123    3534 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-046984\" already exists" pod="kube-system/kube-apiserver-pause-046984"
	Oct 19 17:18:39 pause-046984 kubelet[3534]: I1019 17:18:39.224767    3534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:18:42 pause-046984 kubelet[3534]: E1019 17:18:42.853552    3534 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760894322852373187  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:42 pause-046984 kubelet[3534]: E1019 17:18:42.853584    3534 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760894322852373187  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:46 pause-046984 kubelet[3534]: I1019 17:18:46.806107    3534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 19 17:18:52 pause-046984 kubelet[3534]: E1019 17:18:52.855000    3534 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1760894332854520008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 19 17:18:52 pause-046984 kubelet[3534]: E1019 17:18:52.855053    3534 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1760894332854520008  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-046984 -n pause-046984
helpers_test.go:269: (dbg) Run:  kubectl --context pause-046984 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.73s)


Test pass (280/324)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 25.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 12.94
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.14
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.66
22 TestOffline 117.77
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 195.13
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 16.89
36 TestAddons/parallel/RegistryCreds 0.66
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 5.76
41 TestAddons/parallel/CSI 60.13
42 TestAddons/parallel/Headlamp 18.98
43 TestAddons/parallel/CloudSpanner 7.09
44 TestAddons/parallel/LocalPath 20.14
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 12.61
48 TestAddons/StoppedEnableDisable 86.45
49 TestCertOptions 42
50 TestCertExpiration 366.61
52 TestForceSystemdFlag 52.21
53 TestForceSystemdEnv 58.95
55 TestKVMDriverInstallOrUpdate 1.21
59 TestErrorSpam/setup 36.61
60 TestErrorSpam/start 0.35
61 TestErrorSpam/status 0.77
62 TestErrorSpam/pause 1.65
63 TestErrorSpam/unpause 1.79
64 TestErrorSpam/stop 77.04
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 49.93
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 31
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.49
76 TestFunctional/serial/CacheCmd/cache/add_local 2.25
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 34.89
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.38
87 TestFunctional/serial/LogsFileCmd 1.42
88 TestFunctional/serial/InvalidService 4.23
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 14.89
92 TestFunctional/parallel/DryRun 0.31
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.8
98 TestFunctional/parallel/ServiceCmdConnect 21.6
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 44.83
102 TestFunctional/parallel/SSHCmd 0.38
103 TestFunctional/parallel/CpCmd 1.28
104 TestFunctional/parallel/MySQL 23.26
105 TestFunctional/parallel/FileSync 0.2
106 TestFunctional/parallel/CertSync 1.16
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
114 TestFunctional/parallel/License 0.45
115 TestFunctional/parallel/ServiceCmd/DeployApp 9.18
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.45
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
122 TestFunctional/parallel/ImageCommands/ImageBuild 5.45
123 TestFunctional/parallel/ImageCommands/Setup 1.95
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.25
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.28
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
132 TestFunctional/parallel/ServiceCmd/List 0.3
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
135 TestFunctional/parallel/ServiceCmd/Format 0.32
136 TestFunctional/parallel/ServiceCmd/URL 0.35
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.92
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
140 TestFunctional/parallel/ProfileCmd/profile_list 0.33
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
142 TestFunctional/parallel/MountCmd/any-port 12.56
143 TestFunctional/parallel/MountCmd/specific-port 1.65
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 206.54
162 TestMultiControlPlane/serial/DeployApp 6.8
163 TestMultiControlPlane/serial/PingHostFromPods 1.17
164 TestMultiControlPlane/serial/AddWorkerNode 49.66
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
167 TestMultiControlPlane/serial/CopyFile 12.93
168 TestMultiControlPlane/serial/StopSecondaryNode 90.46
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
170 TestMultiControlPlane/serial/RestartSecondaryNode 33.1
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 376.78
173 TestMultiControlPlane/serial/DeleteSecondaryNode 18.56
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
175 TestMultiControlPlane/serial/StopCluster 251.84
176 TestMultiControlPlane/serial/RestartCluster 98.6
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
178 TestMultiControlPlane/serial/AddSecondaryNode 80.6
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
183 TestJSONOutput/start/Command 79.21
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.63
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.88
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 78.56
215 TestMountStart/serial/StartWithMountFirst 20.13
216 TestMountStart/serial/VerifyMountFirst 0.38
217 TestMountStart/serial/StartWithMountSecond 20.78
218 TestMountStart/serial/VerifyMountSecond 0.38
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.38
221 TestMountStart/serial/Stop 1.27
222 TestMountStart/serial/RestartStopped 19.63
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 126.75
227 TestMultiNode/serial/DeployApp2Nodes 5.45
228 TestMultiNode/serial/PingHostFrom2Pods 0.77
229 TestMultiNode/serial/AddNode 41.44
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.57
232 TestMultiNode/serial/CopyFile 7.19
233 TestMultiNode/serial/StopNode 2.46
234 TestMultiNode/serial/StartAfterStop 37.21
235 TestMultiNode/serial/RestartKeepsNodes 303.38
236 TestMultiNode/serial/DeleteNode 2.74
237 TestMultiNode/serial/StopMultiNode 162.97
238 TestMultiNode/serial/RestartMultiNode 86.76
239 TestMultiNode/serial/ValidateNameConflict 39.1
246 TestScheduledStopUnix 107.09
250 TestRunningBinaryUpgrade 77.01
252 TestKubernetesUpgrade 154.55
255 TestPause/serial/Start 87.43
257 TestStoppedBinaryUpgrade/Setup 3.06
258 TestStoppedBinaryUpgrade/Upgrade 105.95
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
274 TestNetworkPlugins/group/false 3.61
279 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
280 TestNoKubernetes/serial/StartWithK8s 51.9
282 TestStartStop/group/old-k8s-version/serial/FirstStart 109.21
284 TestStartStop/group/no-preload/serial/FirstStart 113.58
285 TestNoKubernetes/serial/StartWithStopK8s 35.63
286 TestNoKubernetes/serial/Start 25
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
288 TestNoKubernetes/serial/ProfileList 12.24
289 TestStartStop/group/old-k8s-version/serial/DeployApp 11.37
290 TestNoKubernetes/serial/Stop 1.31
292 TestStartStop/group/embed-certs/serial/FirstStart 82.46
293 TestNoKubernetes/serial/StartNoArgs 41.08
294 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.86
295 TestStartStop/group/old-k8s-version/serial/Stop 88.14
296 TestStartStop/group/no-preload/serial/DeployApp 10.38
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
298 TestStartStop/group/no-preload/serial/Stop 86.51
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
301 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.01
302 TestStartStop/group/embed-certs/serial/DeployApp 11.27
303 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
304 TestStartStop/group/embed-certs/serial/Stop 87.83
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/old-k8s-version/serial/SecondStart 44.9
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.12
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/no-preload/serial/SecondStart 61.72
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.03
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
315 TestStartStop/group/old-k8s-version/serial/Pause 3.06
317 TestStartStop/group/newest-cni/serial/FirstStart 47.77
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
319 TestStartStop/group/embed-certs/serial/SecondStart 53.21
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
323 TestStartStop/group/no-preload/serial/Pause 3.94
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.8
326 TestNetworkPlugins/group/auto/Start 114.61
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.36
329 TestStartStop/group/newest-cni/serial/Stop 8.81
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
331 TestStartStop/group/newest-cni/serial/SecondStart 56.71
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 20.01
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
336 TestStartStop/group/embed-certs/serial/Pause 4.12
337 TestNetworkPlugins/group/kindnet/Start 73.45
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.4
341 TestNetworkPlugins/group/calico/Start 79.9
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
345 TestStartStop/group/newest-cni/serial/Pause 2.61
346 TestNetworkPlugins/group/custom-flannel/Start 98.02
347 TestNetworkPlugins/group/auto/KubeletFlags 0.24
348 TestNetworkPlugins/group/auto/NetCatPod 12.26
349 TestNetworkPlugins/group/auto/DNS 0.15
350 TestNetworkPlugins/group/auto/Localhost 0.12
351 TestNetworkPlugins/group/auto/HairPin 0.14
352 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
353 TestNetworkPlugins/group/kindnet/KubeletFlags 0.51
354 TestNetworkPlugins/group/kindnet/NetCatPod 11.82
355 TestNetworkPlugins/group/enable-default-cni/Start 84.91
356 TestNetworkPlugins/group/kindnet/DNS 0.19
357 TestNetworkPlugins/group/kindnet/Localhost 0.17
358 TestNetworkPlugins/group/kindnet/HairPin 0.25
359 TestNetworkPlugins/group/calico/ControllerPod 6.03
360 TestNetworkPlugins/group/calico/KubeletFlags 0.28
361 TestNetworkPlugins/group/calico/NetCatPod 10.34
362 TestNetworkPlugins/group/flannel/Start 73.88
363 TestNetworkPlugins/group/calico/DNS 0.17
364 TestNetworkPlugins/group/calico/Localhost 0.17
365 TestNetworkPlugins/group/calico/HairPin 0.16
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
368 TestNetworkPlugins/group/bridge/Start 82.58
369 TestNetworkPlugins/group/custom-flannel/DNS 0.15
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
372 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
373 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
379 TestNetworkPlugins/group/flannel/NetCatPod 11.24
380 TestNetworkPlugins/group/flannel/DNS 0.14
381 TestNetworkPlugins/group/flannel/Localhost 0.12
382 TestNetworkPlugins/group/flannel/HairPin 0.11
383 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
384 TestNetworkPlugins/group/bridge/NetCatPod 10.25
385 TestNetworkPlugins/group/bridge/DNS 0.14
386 TestNetworkPlugins/group/bridge/Localhost 0.11
387 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (25.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-817880 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-817880 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.793349936s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (25.80s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1019 16:21:28.518251  278280 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1019 16:21:28.518386  278280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-817880
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-817880: exit status 85 (61.850374ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-817880 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-817880 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:21:02
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:21:02.765478  278291 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:21:02.765714  278291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:02.765722  278291 out.go:374] Setting ErrFile to fd 2...
	I1019 16:21:02.765726  278291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:02.765918  278291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	W1019 16:21:02.766058  278291 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21683-274250/.minikube/config/config.json: open /home/jenkins/minikube-integration/21683-274250/.minikube/config/config.json: no such file or directory
	I1019 16:21:02.766516  278291 out.go:368] Setting JSON to true
	I1019 16:21:02.768036  278291 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7405,"bootTime":1760883458,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:21:02.768127  278291 start.go:143] virtualization: kvm guest
	I1019 16:21:02.770386  278291 out.go:99] [download-only-817880] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1019 16:21:02.770538  278291 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball: no such file or directory
	I1019 16:21:02.770557  278291 notify.go:221] Checking for updates...
	I1019 16:21:02.771555  278291 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:21:02.772643  278291 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:21:02.773855  278291 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:21:02.775011  278291 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:21:02.776058  278291 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:21:02.777974  278291 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:21:02.778214  278291 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:21:02.811284  278291 out.go:99] Using the kvm2 driver based on user configuration
	I1019 16:21:02.811319  278291 start.go:309] selected driver: kvm2
	I1019 16:21:02.811326  278291 start.go:930] validating driver "kvm2" against <nil>
	I1019 16:21:02.811631  278291 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:02.811718  278291 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:02.825943  278291 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:02.825995  278291 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:02.839026  278291 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:02.839070  278291 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:21:02.839541  278291 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1019 16:21:02.839697  278291 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:21:02.839732  278291 cni.go:84] Creating CNI manager for ""
	I1019 16:21:02.839793  278291 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 16:21:02.839802  278291 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 16:21:02.839846  278291 start.go:353] cluster config:
	{Name:download-only-817880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-817880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:02.840043  278291 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:02.841715  278291 out.go:99] Downloading VM boot image ...
	I1019 16:21:02.841769  278291 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1019 16:21:14.131800  278291 out.go:99] Starting "download-only-817880" primary control-plane node in "download-only-817880" cluster
	I1019 16:21:14.131829  278291 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:21:14.238159  278291 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 16:21:14.238200  278291 cache.go:59] Caching tarball of preloaded images
	I1019 16:21:14.238371  278291 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:21:14.240174  278291 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1019 16:21:14.240201  278291 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 16:21:14.354578  278291 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1019 16:21:14.354699  278291 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1019 16:21:26.995959  278291 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1019 16:21:26.996403  278291 profile.go:143] Saving config to /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/download-only-817880/config.json ...
	I1019 16:21:26.996440  278291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/download-only-817880/config.json: {Name:mk8a777e3e5c66a2c75d750bf072a7770dd3ba49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1019 16:21:26.997134  278291 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1019 16:21:26.997370  278291 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-817880 host does not exist
	  To start a cluster, run: "minikube start -p download-only-817880"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-817880
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (12.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-014353 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-014353 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (12.940522297s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.94s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1019 16:21:41.806002  278280 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1019 16:21:41.806047  278280 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-014353
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-014353: exit status 85 (63.197532ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                ARGS                                                                                                 │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-817880 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-817880 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │ 19 Oct 25 16:21 UTC │
	│ delete  │ -p download-only-817880                                                                                                                                                                             │ download-only-817880 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │ 19 Oct 25 16:21 UTC │
	│ start   │ -o=json --download-only -p download-only-014353 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio --auto-update-drivers=false │ download-only-014353 │ jenkins │ v1.37.0 │ 19 Oct 25 16:21 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/19 16:21:28
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1019 16:21:28.906837  278562 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:21:28.907150  278562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:28.907161  278562 out.go:374] Setting ErrFile to fd 2...
	I1019 16:21:28.907166  278562 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:21:28.907340  278562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:21:28.907810  278562 out.go:368] Setting JSON to true
	I1019 16:21:28.908696  278562 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7431,"bootTime":1760883458,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:21:28.908780  278562 start.go:143] virtualization: kvm guest
	I1019 16:21:28.910524  278562 out.go:99] [download-only-014353] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:21:28.910663  278562 notify.go:221] Checking for updates...
	I1019 16:21:28.912071  278562 out.go:171] MINIKUBE_LOCATION=21683
	I1019 16:21:28.913210  278562 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:21:28.914357  278562 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:21:28.915436  278562 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:21:28.919422  278562 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1019 16:21:28.921346  278562 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1019 16:21:28.921573  278562 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:21:28.952632  278562 out.go:99] Using the kvm2 driver based on user configuration
	I1019 16:21:28.952668  278562 start.go:309] selected driver: kvm2
	I1019 16:21:28.952677  278562 start.go:930] validating driver "kvm2" against <nil>
	I1019 16:21:28.953022  278562 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:28.953116  278562 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:28.966545  278562 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:28.966573  278562 install.go:138] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/21683-274250/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1019 16:21:28.979743  278562 install.go:163] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.37.0
	I1019 16:21:28.979793  278562 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1019 16:21:28.980350  278562 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1019 16:21:28.980508  278562 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1019 16:21:28.980542  278562 cni.go:84] Creating CNI manager for ""
	I1019 16:21:28.980606  278562 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1019 16:21:28.980620  278562 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1019 16:21:28.980676  278562 start.go:353] cluster config:
	{Name:download-only-014353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-014353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:21:28.980785  278562 iso.go:125] acquiring lock: {Name:mk7c0069e2cf0a68d4955dec96c59ff341a488dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1019 16:21:28.982355  278562 out.go:99] Starting "download-only-014353" primary control-plane node in "download-only-014353" cluster
	I1019 16:21:28.982387  278562 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:29.089363  278562 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1019 16:21:29.089404  278562 cache.go:59] Caching tarball of preloaded images
	I1019 16:21:29.089633  278562 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1019 16:21:29.091345  278562 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1019 16:21:29.091370  278562 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1019 16:21:29.206309  278562 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1019 16:21:29.206370  278562 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21683-274250/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-014353 host does not exist
	  To start a cluster, run: "minikube start -p download-only-014353"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-014353
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1019 16:21:42.404615  278280 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-861037 --alsologtostderr --binary-mirror http://127.0.0.1:46113 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
helpers_test.go:175: Cleaning up "binary-mirror-861037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-861037
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
TestOffline (117.77s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-033291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-033291 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m56.931126965s)
helpers_test.go:175: Cleaning up "offline-crio-033291" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-033291
--- PASS: TestOffline (117.77s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-305823
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-305823: exit status 85 (66.594173ms)

                                                
                                                
-- stdout --
	* Profile "addons-305823" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-305823"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-305823
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-305823: exit status 85 (65.414372ms)

                                                
                                                
-- stdout --
	* Profile "addons-305823" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-305823"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (195.13s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-305823 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-305823 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m15.134394013s)
--- PASS: TestAddons/Setup (195.13s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-305823 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-305823 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-305823 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-305823 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5e5ea3bb-43a2-4ca1-9b39-8c21a3399b66] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004207602s
addons_test.go:694: (dbg) Run:  kubectl --context addons-305823 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-305823 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-305823 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

                                                
                                    
TestAddons/parallel/Registry (16.89s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.138572ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
I1019 16:25:17.557734  278280 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1019 16:25:17.557756  278280 kapi.go:107] duration metric: took 10.304751ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:352: "registry-6b586f9694-nsk6g" [cb1792d7-001f-4116-9d14-81d9bf1296bd] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00690824s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-csgnm" [feb6fb77-6deb-4201-83f4-2cdb0d1c4c94] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00399074s
addons_test.go:392: (dbg) Run:  kubectl --context addons-305823 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-305823 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-305823 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.021452437s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 ip
2025/10/19 16:25:33 [DEBUG] GET http://192.168.39.11:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.89s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.140849ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-305823
addons_test.go:332: (dbg) Run:  kubectl --context addons-305823 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
I1019 16:25:17.547465  278280 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "gadget-75v6k" [f3ec496f-83ed-47e3-97c0-28d2e46cfb97] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004324701s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.888811ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4blgt" [e0c8ad3b-5bc4-4de6-9e24-70745935d251] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004000436s
addons_test.go:463: (dbg) Run:  kubectl --context addons-305823 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.76s)

                                                
                                    
TestAddons/parallel/CSI (60.13s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.313647ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-305823 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-305823 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2d95825b-632d-4494-814f-9af9960895a2] Pending
helpers_test.go:352: "task-pv-pod" [2d95825b-632d-4494-814f-9af9960895a2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2d95825b-632d-4494-814f-9af9960895a2] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.003899752s
addons_test.go:572: (dbg) Run:  kubectl --context addons-305823 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-305823 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-305823 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-305823 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-305823 delete pod task-pv-pod: (1.046364403s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-305823 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-305823 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-305823 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ca4504ab-cff7-4ab5-9905-7f145edbdd29] Pending
helpers_test.go:352: "task-pv-pod-restore" [ca4504ab-cff7-4ab5-9905-7f145edbdd29] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ca4504ab-cff7-4ab5-9905-7f145edbdd29] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.003504203s
addons_test.go:614: (dbg) Run:  kubectl --context addons-305823 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-305823 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-305823 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable volumesnapshots --alsologtostderr -v=1: (1.06535212s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.792306402s)
--- PASS: TestAddons/parallel/CSI (60.13s)

                                                
                                    
TestAddons/parallel/Headlamp (18.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-305823 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-vr88x" [4ae0a64d-0d61-4471-b6f2-8b5206a72104] Pending
helpers_test.go:352: "headlamp-6945c6f4d-vr88x" [4ae0a64d-0d61-4471-b6f2-8b5206a72104] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-vr88x" [4ae0a64d-0d61-4471-b6f2-8b5206a72104] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004058799s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable headlamp --alsologtostderr -v=1: (6.034016093s)
--- PASS: TestAddons/parallel/Headlamp (18.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-85n2q" [021f3e23-9477-4d8d-a12a-3f8fe13bf18f] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003550601s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable cloud-spanner --alsologtostderr -v=1: (1.080186199s)
--- PASS: TestAddons/parallel/CloudSpanner (7.09s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (20.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-305823 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-305823 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1452f898-975a-40a5-af2b-4327c4f38fc2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1452f898-975a-40a5-af2b-4327c4f38fc2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1452f898-975a-40a5-af2b-4327c4f38fc2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004752448s
addons_test.go:967: (dbg) Run:  kubectl --context addons-305823 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 ssh "cat /opt/local-path-provisioner/pvc-9e7ebf85-26d7-46d3-bf9a-511475c7798b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-305823 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-305823 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (20.14s)
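The ssh "cat" above reads the written file back from the local-path provisioner's host directory. A sketch that derives that path from the PVC JSON fetched with "kubectl get pvc test-pvc -o=json"; the <volumeName>_<namespace>_<name> directory layout is inferred from the path in the log, and the program expects the kubectl JSON on stdin:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path"
)

// Minimal view of a PVC object: just the fields needed to rebuild the
// on-node directory the provisioner created for it.
type pvc struct {
	Metadata struct {
		Name      string `json:"name"`
		Namespace string `json:"namespace"`
	} `json:"metadata"`
	Spec struct {
		VolumeName string `json:"volumeName"`
	} `json:"spec"`
}

func main() {
	var p pvc
	if err := json.NewDecoder(os.Stdin).Decode(&p); err != nil {
		panic(err)
	}
	dir := fmt.Sprintf("%s_%s_%s", p.Spec.VolumeName, p.Metadata.Namespace, p.Metadata.Name)
	// Prints e.g. /opt/local-path-provisioner/pvc-..._default_test-pvc/file1
	fmt.Println(path.Join("/opt/local-path-provisioner", dir, "file1"))
}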

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dw8kx" [997025d9-b384-42a1-8304-7dc9cd3983b3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010622682s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2bw4w" [c8562067-7dbd-4009-b489-7dccca92177b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006270544s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-305823 addons disable yakd --alsologtostderr -v=1: (6.601057871s)
--- PASS: TestAddons/parallel/Yakd (12.61s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (86.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-305823
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-305823: (1m26.15902656s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-305823
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-305823
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-305823
--- PASS: TestAddons/StoppedEnableDisable (86.45s)

                                                
                                    
x
+
TestCertOptions (42s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-312332 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-312332 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (39.449893973s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-312332 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-312332 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-312332 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-312332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-312332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-312332: (2.040126251s)
--- PASS: TestCertOptions (42.00s)
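TestCertOptions dumps the apiserver certificate with openssl and then checks that the extra IPs, names, and port passed on the start command actually landed in it. A hedged Go sketch of the same SAN check using crypto/x509; the expected IP and DNS name come from the flags above, and reading the certificate from a local apiserver.crt file is an assumption (the test fetches it over "minikube ssh" instead):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// Parse an apiserver certificate and check that the SANs requested via
// --apiserver-ips / --apiserver-names are present.
func main() {
	data, err := os.ReadFile("apiserver.crt") // local path is an assumption
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("not a PEM certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	wantIP := net.ParseIP("192.168.15.15") // from --apiserver-ips
	wantDNS := "www.google.com"            // from --apiserver-names

	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	fmt.Printf("IP SAN present: %v, DNS SAN present: %v\n", ipOK, dnsOK)
}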

                                                
                                    
x
+
TestCertExpiration (366.61s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-067580 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-067580 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.885191582s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-067580 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-067580 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.924600318s)
helpers_test.go:175: Cleaning up "cert-expiration-067580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-067580
--- PASS: TestCertExpiration (366.61s)
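The two starts above differ only in --cert-expiration: first 3m, then 8760h, i.e. one year. A one-line check confirming how those flag values parse as Go durations:

package main

import (
	"fmt"
	"time"
)

// The --cert-expiration values above are ordinary Go durations:
// "3m" is three minutes, "8760h" works out to 365 days.
func main() {
	short, _ := time.ParseDuration("3m")
	long, _ := time.ParseDuration("8760h")
	fmt.Println(short, long, long.Hours()/24, "days")
}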

                                                
                                    
x
+
TestForceSystemdFlag (52.21s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-960537 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-960537 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (50.943331221s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-960537 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-960537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-960537
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-960537: (1.01900285s)
--- PASS: TestForceSystemdFlag (52.21s)

                                                
                                    
x
+
TestForceSystemdEnv (58.95s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-064535 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-064535 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (57.044571101s)
helpers_test.go:175: Cleaning up "force-systemd-env-064535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-064535
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-064535: (1.905810497s)
--- PASS: TestForceSystemdEnv (58.95s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.21s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1019 17:20:45.240019  278280 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1019 17:20:45.240168  278280 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate936107600/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 17:20:45.277878  278280 install.go:163] /tmp/TestKVMDriverInstallOrUpdate936107600/001/docker-machine-driver-kvm2 version is 1.1.1
W1019 17:20:45.278064  278280 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1019 17:20:45.278237  278280 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1019 17:20:45.278489  278280 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate936107600/001/docker-machine-driver-kvm2
I1019 17:20:46.294458  278280 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate936107600/001:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1019 17:20:46.312611  278280 install.go:163] /tmp/TestKVMDriverInstallOrUpdate936107600/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (1.21s)
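The download URL above carries a checksum=file:...sha256 query, so the fetched driver binary is verified against a published SHA-256 file before the stale 1.1.1 driver is replaced. A minimal sketch of that verification step, assuming both the binary and its .sha256 sidecar are already on disk; the checksum-URL plumbing itself is not reproduced:

package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// Verify a downloaded driver binary against a ".sha256" sidecar file,
// the same property the checksum= query in the download URL enforces.
// File names are assumptions for the sketch.
func main() {
	bin, err := os.ReadFile("docker-machine-driver-kvm2-amd64")
	if err != nil {
		panic(err)
	}
	sumFile, err := os.ReadFile("docker-machine-driver-kvm2-amd64.sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	// The sidecar's first whitespace-separated field is the hex digest.
	want, err := hex.DecodeString(string(bytes.Fields(sumFile)[0]))
	if err != nil {
		panic(err)
	}
	fmt.Println("checksum matches:", bytes.Equal(got[:], want))
}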

                                                
                                    
x
+
TestErrorSpam/setup (36.61s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-104621 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 16:29:58.888787  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:58.902897  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:58.914337  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:58.935699  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:58.977070  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:59.058563  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:59.220224  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:29:59.541904  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:30:00.184154  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-104621 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104621 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.610256283s)
--- PASS: TestErrorSpam/setup (36.61s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 status
E1019 16:30:01.466428  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 status
--- PASS: TestErrorSpam/status (0.77s)

                                                
                                    
x
+
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 unpause
E1019 16:30:04.027739  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
x
+
TestErrorSpam/stop (77.04s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop
E1019 16:30:09.149439  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:30:19.391110  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:30:39.873196  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop: (1m14.631911182s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop
E1019 16:31:20.835338  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop: (1.20303599s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-104621 --log_dir /tmp/nospam-104621 stop: (1.200692659s)
--- PASS: TestErrorSpam/stop (77.04s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21683-274250/.minikube/files/etc/test/nested/copy/278280/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (49.93s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-244936 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (49.9305984s)
--- PASS: TestFunctional/serial/StartWithProxy (49.93s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (31s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1019 16:32:12.947901  278280 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --alsologtostderr -v=8
E1019 16:32:42.757334  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-244936 --alsologtostderr -v=8: (31.003152058s)
functional_test.go:678: soft start took 31.003966021s for "functional-244936" cluster.
I1019 16:32:43.951476  278280 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (31.00s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-244936 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:3.1: (1.221062457s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:3.3: (1.214700859s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 cache add registry.k8s.io/pause:latest: (1.049939815s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.49s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-244936 /tmp/TestFunctionalserialCacheCmdcacheadd_local287022611/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache add minikube-local-cache-test:functional-244936
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 cache add minikube-local-cache-test:functional-244936: (1.933658246s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache delete minikube-local-cache-test:functional-244936
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-244936
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (218.202028ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
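The reload check above relies on exit codes: "crictl inspecti" exits non-zero once the image has been removed and zero again after "cache reload" restores it. A sketch that runs the same in-node command and inspects the exit status; the minikube binary path and profile name are copied from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// imagePresent reports whether "crictl inspecti" succeeds inside the node:
// exit 0 means the image is present, a non-zero exit means it is not.
func imagePresent(image string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-244936",
		"ssh", "sudo crictl inspecti "+image)
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false // non-zero exit: image not present
	}
	return err == nil
}

func main() {
	fmt.Println(imagePresent("registry.k8s.io/pause:latest"))
}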

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 kubectl -- --context functional-244936 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-244936 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-244936 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.887320976s)
functional_test.go:776: restart took 34.887456434s for "functional-244936" cluster.
I1019 16:33:26.990680  278280 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (34.89s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-244936 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
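ComponentHealth lists the control-plane pods as JSON and reports each pod's phase together with its Ready condition, as echoed in the lines above. A sketch that decodes the same kubectl JSON (piped in on stdin) using only the fields the check needs:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Minimal view of "kubectl get po -o=json" output: pod name, phase, and
// the Ready condition, i.e. the fields the ComponentHealth check prints.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

// Usage (filename is hypothetical):
//   kubectl --context functional-244936 get po -l tier=control-plane -n kube-system -o=json | go run main.go
func main() {
	var pods podList
	if err := json.NewDecoder(os.Stdin).Decode(&pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, Ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}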

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 logs: (1.38332193s)
--- PASS: TestFunctional/serial/LogsCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 logs --file /tmp/TestFunctionalserialLogsFileCmd3570251512/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 logs --file /tmp/TestFunctionalserialLogsFileCmd3570251512/001/logs.txt: (1.403198893s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.23s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-244936 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-244936
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-244936: exit status 115 (338.844575ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.175:30383 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-244936 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 config get cpus: exit status 14 (66.211374ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 config get cpus: exit status 14 (48.166502ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (14.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244936 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-244936 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 286855: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.89s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244936 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (167.692701ms)

                                                
                                                
-- stdout --
	* [functional-244936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:34:06.580448  287232 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:34:06.580608  287232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:34:06.580619  287232 out.go:374] Setting ErrFile to fd 2...
	I1019 16:34:06.580625  287232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:34:06.580993  287232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:34:06.581614  287232 out.go:368] Setting JSON to false
	I1019 16:34:06.583031  287232 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8189,"bootTime":1760883458,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:34:06.583170  287232 start.go:143] virtualization: kvm guest
	I1019 16:34:06.584399  287232 out.go:179] * [functional-244936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 16:34:06.585604  287232 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:34:06.585672  287232 notify.go:221] Checking for updates...
	I1019 16:34:06.587634  287232 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:34:06.588848  287232 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:34:06.589876  287232 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:34:06.590729  287232 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:34:06.591719  287232 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:34:06.593238  287232 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:34:06.593964  287232 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:34:06.594064  287232 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:34:06.615928  287232 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43775
	I1019 16:34:06.616492  287232 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:34:06.617495  287232 main.go:143] libmachine: Using API Version  1
	I1019 16:34:06.617518  287232 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:34:06.618080  287232 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:34:06.618352  287232 main.go:143] libmachine: (functional-244936) Calling .DriverName
	I1019 16:34:06.618688  287232 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:34:06.619141  287232 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:34:06.619279  287232 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:34:06.634265  287232 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38203
	I1019 16:34:06.634791  287232 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:34:06.635438  287232 main.go:143] libmachine: Using API Version  1
	I1019 16:34:06.635462  287232 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:34:06.635790  287232 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:34:06.635997  287232 main.go:143] libmachine: (functional-244936) Calling .DriverName
	I1019 16:34:06.666821  287232 out.go:179] * Using the kvm2 driver based on existing profile
	I1019 16:34:06.667849  287232 start.go:309] selected driver: kvm2
	I1019 16:34:06.667868  287232 start.go:930] validating driver "kvm2" against &{Name:functional-244936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-244936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:34:06.667993  287232 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:34:06.670026  287232 out.go:203] 
	W1019 16:34:06.670656  287232 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1019 16:34:06.671518  287232 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
--- PASS: TestFunctional/parallel/DryRun (0.31s)
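The failing dry run exits with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the 1800MB usable minimum quoted in the error. A sketch of that bounds check; the floor comes from the error text and exit code 23 from the non-zero exit recorded above:

package main

import (
	"fmt"
	"os"
)

// Reproduce the memory bounds check implied by the error above: a
// requested allocation below the usable minimum is rejected. The 1800MB
// floor is taken from the error text; exit code 23 is what the dry run
// returned in the log.
func main() {
	const minUsableMB = 1800
	requestedMB := 250
	if requestedMB < minUsableMB {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			requestedMB, minUsableMB)
		os.Exit(23)
	}
}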

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-244936 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-244936 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 23 (137.555373ms)

                                                
                                                
-- stdout --
	* [functional-244936] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:33:51.040689  286263 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:33:51.040779  286263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:33:51.040783  286263 out.go:374] Setting ErrFile to fd 2...
	I1019 16:33:51.040787  286263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:33:51.041140  286263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:33:51.041602  286263 out.go:368] Setting JSON to false
	I1019 16:33:51.042621  286263 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8173,"bootTime":1760883458,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 16:33:51.042725  286263 start.go:143] virtualization: kvm guest
	I1019 16:33:51.044393  286263 out.go:179] * [functional-244936] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1019 16:33:51.045419  286263 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 16:33:51.045426  286263 notify.go:221] Checking for updates...
	I1019 16:33:51.047608  286263 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 16:33:51.048690  286263 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 16:33:51.049739  286263 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 16:33:51.050783  286263 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 16:33:51.051707  286263 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 16:33:51.053280  286263 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:33:51.053944  286263 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:33:51.054034  286263 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:33:51.070456  286263 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:39949
	I1019 16:33:51.070971  286263 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:33:51.071488  286263 main.go:143] libmachine: Using API Version  1
	I1019 16:33:51.071510  286263 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:33:51.071907  286263 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:33:51.072106  286263 main.go:143] libmachine: (functional-244936) Calling .DriverName
	I1019 16:33:51.072412  286263 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 16:33:51.072843  286263 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:33:51.072921  286263 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:33:51.086521  286263 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41463
	I1019 16:33:51.086958  286263 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:33:51.087433  286263 main.go:143] libmachine: Using API Version  1
	I1019 16:33:51.087463  286263 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:33:51.087806  286263 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:33:51.088045  286263 main.go:143] libmachine: (functional-244936) Calling .DriverName
	I1019 16:33:51.118317  286263 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1019 16:33:51.119170  286263 start.go:309] selected driver: kvm2
	I1019 16:33:51.119186  286263 start.go:930] validating driver "kvm2" against &{Name:functional-244936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-244936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1019 16:33:51.119285  286263 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 16:33:51.120884  286263 out.go:203] 
	W1019 16:33:51.121677  286263 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1019 16:33:51.122541  286263 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
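
The French output above is the point of this test: minikube localizes its user-facing messages to the host locale, and the dry-run is expected to fail with exit status 23 because 250MB is below the 1800MB minimum. A minimal sketch of reproducing the same localized failure by hand, assuming minikube picks the locale up from the standard LC_ALL/LANG environment variables:

# Force a French locale and request too little memory; expect exit status 23
# and the RSRC_INSUFFICIENT_REQ_MEMORY message shown above, rendered in French.
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-244936 --dry-run \
  --memory 250MB --driver=kvm2 --container-runtime=crio
echo "exit status: $?"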

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.80s)
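
The three invocations above cover the default, go-template, and JSON status formats. A small sketch of using the templated form in a script, with the field names ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) taken from the command in the log; the gating logic is an illustrative addition, not part of the test:

# Emit one status line and fail fast unless the host reports Running.
status=$(out/minikube-linux-amd64 -p functional-244936 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}')
echo "$status"
case "$status" in
  host:Running*) echo "host is up" ;;
  *) echo "host is not running" >&2; exit 1 ;;
esac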

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (21.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-244936 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-244936 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-gb5zm" [25080589-a08c-48bb-b0d9-48461a3d0f09] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-gb5zm" [25080589-a08c-48bb-b0d9-48461a3d0f09] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.005821853s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.175:30700
functional_test.go:1680: http://192.168.39.175:30700: success! body:
Request served by hello-node-connect-7d85dfc575-gb5zm

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.175:30700
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.60s)
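
The sequence above is: create a deployment from the echo-server image, expose it as a NodePort service, ask minikube for the node URL, and fetch it. A short sketch of the same round trip done by hand against this profile; the kubectl wait step is an illustrative stand-in for the pod-readiness polling the test performs:

kubectl --context functional-244936 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-244936 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-244936 wait --for=condition=Available deployment/hello-node-connect --timeout=120s
url=$(out/minikube-linux-amd64 -p functional-244936 service hello-node-connect --url)
curl -s "$url"    # the echo server answers with the request details, as in the body above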

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (44.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b6a3b455-0931-42fe-90ef-d648250a6dbf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.008660655s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-244936 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-244936 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-244936 get pvc myclaim -o=json
I1019 16:33:40.424885  278280 retry.go:31] will retry after 1.04723131s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:514c273b-be41-4107-a877-7c100d50ca59 ResourceVersion:726 Generation:0 CreationTimestamp:2025-10-19 16:33:40 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017c33c0 VolumeMode:0xc0017c33d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-244936 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-244936 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2f3ddf7e-ac03-4260-b2db-496ab30a19b1] Pending
helpers_test.go:352: "sp-pod" [2f3ddf7e-ac03-4260-b2db-496ab30a19b1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2f3ddf7e-ac03-4260-b2db-496ab30a19b1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004207267s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-244936 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-244936 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-244936 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [5d36ad28-51d7-484b-b97b-2f6de26073b9] Pending
helpers_test.go:352: "sp-pod" [5d36ad28-51d7-484b-b97b-2f6de26073b9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [5d36ad28-51d7-484b-b97b-2f6de26073b9] Running
2025/10/19 16:34:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00404921s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-244936 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.83s)
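
The PVC manifest itself is not printed, but its spec can be read from the last-applied-configuration annotation in the retry message above (ReadWriteOnce, 500Mi, Filesystem). A sketch of the persistence check the test performs; the inline claim below is reconstructed from that annotation, while sp-pod and testdata/storage-provisioner/pod.yaml are the pod name and manifest referenced in the log (readiness waits omitted for brevity):

# Create the claim, write a marker file through a pod that mounts it,
# delete the pod, recreate it, and confirm the file survived the re-bind.
kubectl --context functional-244936 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
kubectl --context functional-244936 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-244936 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-244936 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-244936 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-244936 exec sp-pod -- ls /tmp/mount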

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh -n functional-244936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cp functional-244936:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2443976634/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh -n functional-244936 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh -n functional-244936 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)
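
The cp subcommand is exercised host-to-VM and VM-to-host above. A tiny round-trip check along the same lines, copying the file in, copying it back out, and comparing the two; the /tmp/cp-test-roundtrip.txt path is just an illustrative choice:

out/minikube-linux-amd64 -p functional-244936 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-244936 cp functional-244936:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip OK"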

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-244936 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-jtfqz" [06d335c4-c7ac-4f6d-8161-049ac784fd52] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-jtfqz" [06d335c4-c7ac-4f6d-8161-049ac784fd52] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.014638337s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-244936 exec mysql-5bb876957f-jtfqz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-244936 exec mysql-5bb876957f-jtfqz -- mysql -ppassword -e "show databases;": exit status 1 (184.536951ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1019 16:33:57.156108  278280 retry.go:31] will retry after 1.427851542s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-244936 exec mysql-5bb876957f-jtfqz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-244936 exec mysql-5bb876957f-jtfqz -- mysql -ppassword -e "show databases;": exit status 1 (151.22239ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1019 16:33:58.736375  278280 retry.go:31] will retry after 1.195592297s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-244936 exec mysql-5bb876957f-jtfqz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.26s)
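
The two failed exec attempts above (ERROR 1045, then ERROR 2002) happen while the MySQL container is still initializing, which is why the test retries before succeeding. A sketch of the same wait-until-ready loop by hand, using the pod name and root password from the log:

pod=mysql-5bb876957f-jtfqz
until kubectl --context functional-244936 exec "$pod" -- mysql -ppassword -e "show databases;"; do
  echo "mysql not ready yet, retrying in 2s..." >&2
  sleep 2
done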

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/278280/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /etc/test/nested/copy/278280/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/278280.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /etc/ssl/certs/278280.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/278280.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /usr/share/ca-certificates/278280.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2782802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /etc/ssl/certs/2782802.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2782802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /usr/share/ca-certificates/2782802.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.16s)
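
Each certificate is checked under its .pem name and under a hashed name (51391683.0, 3ec20f2e.0). Assuming those .0 names follow the usual OpenSSL c_rehash convention and that openssl is available inside the VM, the hash can be derived from the certificate as below; this is a way to sanity-check the naming, not something the test itself runs:

out/minikube-linux-amd64 -p functional-244936 ssh \
  "openssl x509 -noout -hash -in /etc/ssl/certs/278280.pem"
# The printed hash should match the prefix of the corresponding .0 file checked above.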

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-244936 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "sudo systemctl is-active docker": exit status 1 (215.063237ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "sudo systemctl is-active containerd": exit status 1 (209.805775ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)
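
Exit status 3 from systemctl is-active is the standard code for an inactive unit, so the non-zero exits above confirm that docker and containerd are disabled rather than indicating an ssh problem. The complementary check, that the runtime this profile was started with is the active one, would look like:

out/minikube-linux-amd64 -p functional-244936 ssh "sudo systemctl is-active crio"
# expected: "active" and exit status 0, since the profile uses --container-runtime=crio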

                                                
                                    
x
+
TestFunctional/parallel/License (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-244936 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-244936 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-l5fnq" [3fcd3660-0142-4621-8e21-c03be2536ab3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-l5fnq" [3fcd3660-0142-4621-8e21-c03be2536ab3] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004920287s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.18s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244936 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-244936
localhost/kicbase/echo-server:functional-244936
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244936 image ls --format short --alsologtostderr:
I1019 16:34:07.741426  287571 out.go:360] Setting OutFile to fd 1 ...
I1019 16:34:07.741725  287571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.741735  287571 out.go:374] Setting ErrFile to fd 2...
I1019 16:34:07.741738  287571 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.741972  287571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
I1019 16:34:07.742627  287571 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.742758  287571 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.743162  287571 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:07.743226  287571 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:07.758044  287571 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:42041
I1019 16:34:07.758540  287571 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:07.759170  287571 main.go:143] libmachine: Using API Version  1
I1019 16:34:07.759207  287571 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:07.759544  287571 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:07.759762  287571 main.go:143] libmachine: (functional-244936) Calling .GetState
I1019 16:34:07.761675  287571 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:07.761715  287571 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:07.775235  287571 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46641
I1019 16:34:07.775731  287571 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:07.776319  287571 main.go:143] libmachine: Using API Version  1
I1019 16:34:07.776348  287571 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:07.776759  287571 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:07.776965  287571 main.go:143] libmachine: (functional-244936) Calling .DriverName
I1019 16:34:07.777231  287571 ssh_runner.go:195] Run: systemctl --version
I1019 16:34:07.777265  287571 main.go:143] libmachine: (functional-244936) Calling .GetSSHHostname
I1019 16:34:07.780607  287571 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:07.781208  287571 main.go:143] libmachine: (functional-244936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:1f", ip: ""} in network mk-functional-244936: {Iface:virbr1 ExpiryTime:2025-10-19 17:31:38 +0000 UTC Type:0 Mac:52:54:00:4b:5f:1f Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-244936 Clientid:01:52:54:00:4b:5f:1f}
I1019 16:34:07.781244  287571 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined IP address 192.168.39.175 and MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:07.781427  287571 main.go:143] libmachine: (functional-244936) Calling .GetSSHPort
I1019 16:34:07.781616  287571 main.go:143] libmachine: (functional-244936) Calling .GetSSHKeyPath
I1019 16:34:07.781756  287571 main.go:143] libmachine: (functional-244936) Calling .GetSSHUsername
I1019 16:34:07.781897  287571 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/functional-244936/id_rsa Username:docker}
I1019 16:34:07.866363  287571 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 16:34:07.910731  287571 main.go:143] libmachine: Making call to close driver server
I1019 16:34:07.910746  287571 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:07.911103  287571 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:07.911121  287571 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:07.911129  287571 main.go:143] libmachine: Making call to close driver server
I1019 16:34:07.911136  287571 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:07.911138  287571 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:07.911441  287571 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:07.911456  287571 main.go:143] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
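
As the stderr trace shows, image ls is answered by running sudo crictl images --output json over ssh inside the VM. Querying the runtime directly should therefore list the same images as the short format above:

out/minikube-linux-amd64 -p functional-244936 ssh "sudo crictl images"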

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244936 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-244936  │ 9056ab77afb8e │ 4.95MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-244936  │ ce781865d086b │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244936 image ls --format table --alsologtostderr:
I1019 16:34:08.210445  287684 out.go:360] Setting OutFile to fd 1 ...
I1019 16:34:08.210664  287684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:08.210672  287684 out.go:374] Setting ErrFile to fd 2...
I1019 16:34:08.210676  287684 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:08.210865  287684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
I1019 16:34:08.211455  287684 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:08.211544  287684 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:08.211888  287684 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:08.211931  287684 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.224499  287684 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:46879
I1019 16:34:08.224974  287684 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.225490  287684 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.225512  287684 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.225950  287684 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.226171  287684 main.go:143] libmachine: (functional-244936) Calling .GetState
I1019 16:34:08.228352  287684 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:08.228388  287684 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.244544  287684 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44501
I1019 16:34:08.244892  287684 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.245257  287684 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.245281  287684 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.245722  287684 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.245962  287684 main.go:143] libmachine: (functional-244936) Calling .DriverName
I1019 16:34:08.246232  287684 ssh_runner.go:195] Run: systemctl --version
I1019 16:34:08.246261  287684 main.go:143] libmachine: (functional-244936) Calling .GetSSHHostname
I1019 16:34:08.249456  287684 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.249919  287684 main.go:143] libmachine: (functional-244936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:1f", ip: ""} in network mk-functional-244936: {Iface:virbr1 ExpiryTime:2025-10-19 17:31:38 +0000 UTC Type:0 Mac:52:54:00:4b:5f:1f Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-244936 Clientid:01:52:54:00:4b:5f:1f}
I1019 16:34:08.249945  287684 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined IP address 192.168.39.175 and MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.250181  287684 main.go:143] libmachine: (functional-244936) Calling .GetSSHPort
I1019 16:34:08.250361  287684 main.go:143] libmachine: (functional-244936) Calling .GetSSHKeyPath
I1019 16:34:08.250518  287684 main.go:143] libmachine: (functional-244936) Calling .GetSSHUsername
I1019 16:34:08.250684  287684 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/functional-244936/id_rsa Username:docker}
I1019 16:34:08.331348  287684 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 16:34:08.377660  287684 main.go:143] libmachine: Making call to close driver server
I1019 16:34:08.377683  287684 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:08.377998  287684 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:08.378019  287684 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:08.378017  287684 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:08.378028  287684 main.go:143] libmachine: Making call to close driver server
I1019 16:34:08.378036  287684 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:08.378324  287684 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:08.378357  287684 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:08.378367  287684 main.go:143] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244936 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"r
epoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea
929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"24
7077"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4eb
f583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"ce781865d086ba6fc9334ee0ab7cdb2a9a915d3ed77c824f5ece95fb4ecc90c1","repoDigests":["localhost/minikube-local-cache-test@sha256:7ba17b96c2fe57af9f5c19cd62441a5a0afb2024b8966809f1b7bd15ec77d23e"],"repoTags":["localhost/minikube-local-cache-test:functional-244936"],"size":"3330"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0
115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aa
e68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-244936"],"size":"4945146"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244936 image ls --format json --alsologtostderr:
I1019 16:34:07.985689  287629 out.go:360] Setting OutFile to fd 1 ...
I1019 16:34:07.985963  287629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.985974  287629 out.go:374] Setting ErrFile to fd 2...
I1019 16:34:07.985993  287629 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.986214  287629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
I1019 16:34:07.986736  287629 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.986849  287629 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.987262  287629 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:07.987339  287629 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.000171  287629 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45113
I1019 16:34:08.000601  287629 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.001094  287629 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.001130  287629 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.001554  287629 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.001824  287629 main.go:143] libmachine: (functional-244936) Calling .GetState
I1019 16:34:08.003950  287629 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:08.004050  287629 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.016553  287629 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43057
I1019 16:34:08.017006  287629 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.017464  287629 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.017484  287629 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.017803  287629 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.018002  287629 main.go:143] libmachine: (functional-244936) Calling .DriverName
I1019 16:34:08.018226  287629 ssh_runner.go:195] Run: systemctl --version
I1019 16:34:08.018259  287629 main.go:143] libmachine: (functional-244936) Calling .GetSSHHostname
I1019 16:34:08.021320  287629 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.021764  287629 main.go:143] libmachine: (functional-244936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:1f", ip: ""} in network mk-functional-244936: {Iface:virbr1 ExpiryTime:2025-10-19 17:31:38 +0000 UTC Type:0 Mac:52:54:00:4b:5f:1f Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-244936 Clientid:01:52:54:00:4b:5f:1f}
I1019 16:34:08.021813  287629 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined IP address 192.168.39.175 and MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.022014  287629 main.go:143] libmachine: (functional-244936) Calling .GetSSHPort
I1019 16:34:08.022178  287629 main.go:143] libmachine: (functional-244936) Calling .GetSSHKeyPath
I1019 16:34:08.022328  287629 main.go:143] libmachine: (functional-244936) Calling .GetSSHUsername
I1019 16:34:08.022493  287629 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/functional-244936/id_rsa Username:docker}
I1019 16:34:08.106836  287629 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 16:34:08.153034  287629 main.go:143] libmachine: Making call to close driver server
I1019 16:34:08.153054  287629 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:08.153331  287629 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:08.153353  287629 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:08.153364  287629 main.go:143] libmachine: Making call to close driver server
I1019 16:34:08.153366  287629 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:08.153373  287629 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:08.153622  287629 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:08.153664  287629 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:08.153689  287629 main.go:143] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
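
The JSON format carries id, repoDigests, repoTags, and size for every image. A small post-processing sketch, assuming jq is installed on the host (it is not part of the test), that prints one tag-and-size line per tagged image from the same output:

out/minikube-linux-amd64 -p functional-244936 image ls --format json 2>/dev/null \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'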

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244936 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ce781865d086ba6fc9334ee0ab7cdb2a9a915d3ed77c824f5ece95fb4ecc90c1
repoDigests:
- localhost/minikube-local-cache-test@sha256:7ba17b96c2fe57af9f5c19cd62441a5a0afb2024b8966809f1b7bd15ec77d23e
repoTags:
- localhost/minikube-local-cache-test:functional-244936
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-244936
size: "4945146"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244936 image ls --format yaml --alsologtostderr:
I1019 16:34:07.752776  287578 out.go:360] Setting OutFile to fd 1 ...
I1019 16:34:07.753069  287578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.753080  287578 out.go:374] Setting ErrFile to fd 2...
I1019 16:34:07.753087  287578 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:07.753274  287578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
I1019 16:34:07.753826  287578 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.753937  287578 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:07.754353  287578 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:07.754431  287578 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:07.768508  287578 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38093
I1019 16:34:07.768991  287578 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:07.769480  287578 main.go:143] libmachine: Using API Version  1
I1019 16:34:07.769501  287578 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:07.769899  287578 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:07.770130  287578 main.go:143] libmachine: (functional-244936) Calling .GetState
I1019 16:34:07.772607  287578 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:07.772642  287578 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:07.787802  287578 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:35609
I1019 16:34:07.788205  287578 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:07.788671  287578 main.go:143] libmachine: Using API Version  1
I1019 16:34:07.788694  287578 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:07.789129  287578 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:07.789345  287578 main.go:143] libmachine: (functional-244936) Calling .DriverName
I1019 16:34:07.789606  287578 ssh_runner.go:195] Run: systemctl --version
I1019 16:34:07.789635  287578 main.go:143] libmachine: (functional-244936) Calling .GetSSHHostname
I1019 16:34:07.792468  287578 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:07.792906  287578 main.go:143] libmachine: (functional-244936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:1f", ip: ""} in network mk-functional-244936: {Iface:virbr1 ExpiryTime:2025-10-19 17:31:38 +0000 UTC Type:0 Mac:52:54:00:4b:5f:1f Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-244936 Clientid:01:52:54:00:4b:5f:1f}
I1019 16:34:07.792945  287578 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined IP address 192.168.39.175 and MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:07.793052  287578 main.go:143] libmachine: (functional-244936) Calling .GetSSHPort
I1019 16:34:07.793242  287578 main.go:143] libmachine: (functional-244936) Calling .GetSSHKeyPath
I1019 16:34:07.793408  287578 main.go:143] libmachine: (functional-244936) Calling .GetSSHUsername
I1019 16:34:07.793586  287578 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/functional-244936/id_rsa Username:docker}
I1019 16:34:07.879551  287578 ssh_runner.go:195] Run: sudo crictl images --output json
I1019 16:34:07.929234  287578 main.go:143] libmachine: Making call to close driver server
I1019 16:34:07.929251  287578 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:07.929511  287578 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:07.929546  287578 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:07.929557  287578 main.go:143] libmachine: Making call to close driver server
I1019 16:34:07.929558  287578 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:07.929571  287578 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:07.929837  287578 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
I1019 16:34:07.929839  287578 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:07.929874  287578 main.go:143] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
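
For reference, the listing exercised above can be reproduced by hand against the same profile; a minimal sketch, assuming the functional-244936 profile is still running and the binary is invoked from the workspace as in the log:

# list images known to the CRI-O runtime inside the minikube VM, as the test does
out/minikube-linux-amd64 -p functional-244936 image ls --format yaml
# the same data the command collects on the node via ssh_runner
out/minikube-linux-amd64 -p functional-244936 ssh -- sudo crictl images --output json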

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (5.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh pgrep buildkitd: exit status 1 (215.18924ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image build -t localhost/my-image:functional-244936 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 image build -t localhost/my-image:functional-244936 testdata/build --alsologtostderr: (4.968438134s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-244936 image build -t localhost/my-image:functional-244936 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 978f260d651
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-244936
--> a5382731bc8
Successfully tagged localhost/my-image:functional-244936
a5382731bc8ddb33e38c507facfdad8eda9ca4dc8fbae615144a451258b23c22
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-244936 image build -t localhost/my-image:functional-244936 testdata/build --alsologtostderr:
I1019 16:34:08.185804  287674 out.go:360] Setting OutFile to fd 1 ...
I1019 16:34:08.186230  287674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:08.186245  287674 out.go:374] Setting ErrFile to fd 2...
I1019 16:34:08.186251  287674 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1019 16:34:08.186552  287674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
I1019 16:34:08.187339  287674 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:08.188199  287674 config.go:182] Loaded profile config "functional-244936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1019 16:34:08.188722  287674 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:08.188775  287674 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.204153  287674 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:42169
I1019 16:34:08.204684  287674 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.205323  287674 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.205353  287674 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.205711  287674 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.205923  287674 main.go:143] libmachine: (functional-244936) Calling .GetState
I1019 16:34:08.208039  287674 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1019 16:34:08.208097  287674 main.go:143] libmachine: Launching plugin server for driver kvm2
I1019 16:34:08.221840  287674 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44733
I1019 16:34:08.222343  287674 main.go:143] libmachine: () Calling .GetVersion
I1019 16:34:08.222875  287674 main.go:143] libmachine: Using API Version  1
I1019 16:34:08.222896  287674 main.go:143] libmachine: () Calling .SetConfigRaw
I1019 16:34:08.223267  287674 main.go:143] libmachine: () Calling .GetMachineName
I1019 16:34:08.223455  287674 main.go:143] libmachine: (functional-244936) Calling .DriverName
I1019 16:34:08.223639  287674 ssh_runner.go:195] Run: systemctl --version
I1019 16:34:08.223670  287674 main.go:143] libmachine: (functional-244936) Calling .GetSSHHostname
I1019 16:34:08.227053  287674 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.227476  287674 main.go:143] libmachine: (functional-244936) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:5f:1f", ip: ""} in network mk-functional-244936: {Iface:virbr1 ExpiryTime:2025-10-19 17:31:38 +0000 UTC Type:0 Mac:52:54:00:4b:5f:1f Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:functional-244936 Clientid:01:52:54:00:4b:5f:1f}
I1019 16:34:08.227508  287674 main.go:143] libmachine: (functional-244936) DBG | domain functional-244936 has defined IP address 192.168.39.175 and MAC address 52:54:00:4b:5f:1f in network mk-functional-244936
I1019 16:34:08.227687  287674 main.go:143] libmachine: (functional-244936) Calling .GetSSHPort
I1019 16:34:08.227852  287674 main.go:143] libmachine: (functional-244936) Calling .GetSSHKeyPath
I1019 16:34:08.228003  287674 main.go:143] libmachine: (functional-244936) Calling .GetSSHUsername
I1019 16:34:08.228149  287674 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/functional-244936/id_rsa Username:docker}
I1019 16:34:08.314431  287674 build_images.go:162] Building image from path: /tmp/build.2857123107.tar
I1019 16:34:08.314494  287674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1019 16:34:08.325870  287674 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2857123107.tar
I1019 16:34:08.331269  287674 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2857123107.tar: stat -c "%s %y" /var/lib/minikube/build/build.2857123107.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2857123107.tar': No such file or directory
I1019 16:34:08.331301  287674 ssh_runner.go:362] scp /tmp/build.2857123107.tar --> /var/lib/minikube/build/build.2857123107.tar (3072 bytes)
I1019 16:34:08.364518  287674 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2857123107
I1019 16:34:08.379006  287674 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2857123107 -xf /var/lib/minikube/build/build.2857123107.tar
I1019 16:34:08.392942  287674 crio.go:315] Building image: /var/lib/minikube/build/build.2857123107
I1019 16:34:08.393022  287674 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-244936 /var/lib/minikube/build/build.2857123107 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1019 16:34:13.073776  287674 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-244936 /var/lib/minikube/build/build.2857123107 --cgroup-manager=cgroupfs: (4.680718207s)
I1019 16:34:13.073857  287674 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2857123107
I1019 16:34:13.086180  287674 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2857123107.tar
I1019 16:34:13.097896  287674 build_images.go:218] Built localhost/my-image:functional-244936 from /tmp/build.2857123107.tar
I1019 16:34:13.097943  287674 build_images.go:134] succeeded building to: functional-244936
I1019 16:34:13.097951  287674 build_images.go:135] failed building to: 
I1019 16:34:13.097998  287674 main.go:143] libmachine: Making call to close driver server
I1019 16:34:13.098015  287674 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:13.098376  287674 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:13.098395  287674 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:13.098404  287674 main.go:143] libmachine: Making call to close driver server
I1019 16:34:13.098412  287674 main.go:143] libmachine: (functional-244936) Calling .Close
I1019 16:34:13.098655  287674 main.go:143] libmachine: Successfully made call to close driver server
I1019 16:34:13.098672  287674 main.go:143] libmachine: Making call to close connection to plugin binary
I1019 16:34:13.098726  287674 main.go:143] libmachine: (functional-244936) DBG | Closing plugin on server side
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.45s)
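
The STEP 1/3 through 3/3 lines above imply a three-instruction build context in testdata/build. A minimal sketch that reproduces an equivalent build; the directory name, the content.txt payload, and the Dockerfile text are illustrative assumptions reconstructed from the output, not the repository's exact testdata:

mkdir -p /tmp/build-demo && cd /tmp/build-demo
echo demo > content.txt    # stand-in for the real content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# minikube tars the context, copies it into the VM, and builds there
out/minikube-linux-amd64 -p functional-244936 image build -t localhost/my-image:functional-244936 .

The build_images.go and crio.go lines above show the flow: the context is scp'd as a tar, unpacked under /var/lib/minikube/build, and built with "sudo podman build --cgroup-manager=cgroupfs" inside the VM.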

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.928920543s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-244936
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image load --daemon kicbase/echo-server:functional-244936 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 image load --daemon kicbase/echo-server:functional-244936 --alsologtostderr: (1.03516945s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.25s)
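
The same round trip can be run manually; a minimal sketch, assuming a host Docker daemon is available and the functional-244936 profile is running:

# stage an image in the host daemon, then push it into the cluster runtime
docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-244936
out/minikube-linux-amd64 -p functional-244936 image load --daemon kicbase/echo-server:functional-244936
# confirm it is now visible to CRI-O inside the VM
out/minikube-linux-amd64 -p functional-244936 image ls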

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image load --daemon kicbase/echo-server:functional-244936 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-244936
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image load --daemon kicbase/echo-server:functional-244936 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image save kicbase/echo-server:functional-244936 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1019 16:33:41.685682  278280 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service list -o json
functional_test.go:1504: Took "313.398231ms" to run "out/minikube-linux-amd64 -p functional-244936 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.175:32419
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.175:32419
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
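
The endpoint printed above is a NodePort on the VM's IP, so it can be probed directly from the host; a minimal sketch, assuming the hello-node service created earlier in this run still exists:

# resolve the service URL and hit it from the host
URL=$(out/minikube-linux-amd64 -p functional-244936 service hello-node --url)
curl -s "$URL"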

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-244936 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.656416995s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.92s)
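
Taken together with ImageSaveToFile above, this is a save/load round trip through a tarball; a minimal sketch using an illustrative /tmp path instead of the Jenkins workspace path from the log:

# export an image from the cluster runtime to a tar, then import it back
out/minikube-linux-amd64 -p functional-244936 image save kicbase/echo-server:functional-244936 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-244936 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-244936 image ls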

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-244936
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 image save --daemon kicbase/echo-server:functional-244936 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-244936
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
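
Note that "image save --daemon" writes the image back into the host Docker daemon under the localhost/ prefix, which is why the inspect step above targets localhost/kicbase/echo-server. A minimal sketch of the same sequence:

docker rmi kicbase/echo-server:functional-244936
out/minikube-linux-amd64 -p functional-244936 image save --daemon kicbase/echo-server:functional-244936
docker image inspect localhost/kicbase/echo-server:functional-244936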

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "279.006585ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "55.630879ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "403.947385ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.899494ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdany-port926760171/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760891632268357212" to /tmp/TestFunctionalparallelMountCmdany-port926760171/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760891632268357212" to /tmp/TestFunctionalparallelMountCmdany-port926760171/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760891632268357212" to /tmp/TestFunctionalparallelMountCmdany-port926760171/001/test-1760891632268357212
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.689878ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:33:52.469345  278280 retry.go:31] will retry after 501.009432ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 19 16:33 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 19 16:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 19 16:33 test-1760891632268357212
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh cat /mount-9p/test-1760891632268357212
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-244936 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [94ac1eff-3ef8-4440-874b-d4608fede930] Pending
helpers_test.go:352: "busybox-mount" [94ac1eff-3ef8-4440-874b-d4608fede930] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [94ac1eff-3ef8-4440-874b-d4608fede930] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [94ac1eff-3ef8-4440-874b-d4608fede930] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.00357134s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-244936 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdany-port926760171/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.56s)
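
The 9p mount flow above can be reproduced by hand; a minimal sketch, assuming an illustrative /tmp/host-dir source directory (the mount command blocks, so it is backgrounded here and torn down with --kill, as the VerifyCleanup test does later):

mkdir -p /tmp/host-dir
out/minikube-linux-amd64 mount -p functional-244936 /tmp/host-dir:/mount-9p --port 46464 &
sleep 2    # give the mount a moment to come up; the test itself retries
# verify the 9p mount is visible inside the VM and list its contents
out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-244936 ssh -- ls -la /mount-9p
# tear down any mounts started for this profile
out/minikube-linux-amd64 mount -p functional-244936 --kill=true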

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdspecific-port3927049710/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.660725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:34:05.055035  278280 retry.go:31] will retry after 299.12461ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdspecific-port3927049710/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
I1019 16:34:05.812042  278280 detect.go:223] nested VM detected
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "sudo umount -f /mount-9p": exit status 1 (263.653067ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-244936 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdspecific-port3927049710/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T" /mount1: exit status 1 (293.76877ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1019 16:34:06.774710  278280 retry.go:31] will retry after 271.512778ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-244936 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-244936 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-244936 /tmp/TestFunctionalparallelMountCmdVerifyCleanup574064601/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-244936
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-244936
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-244936
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (206.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 16:34:58.888575  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:35:26.600203  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (3m25.874416469s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (206.54s)
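
The HA cluster brought up here can be reproduced with the same flags; a minimal sketch, dropping only the log-verbosity and CI-specific options from the command above:

out/minikube-linux-amd64 -p ha-626163 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
out/minikube-linux-amd64 -p ha-626163 status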

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 kubectl -- rollout status deployment/busybox: (4.662670878s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-c4p6p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-gphsb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-m4qx7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-c4p6p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-gphsb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-m4qx7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-c4p6p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-gphsb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-m4qx7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.80s)
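
The DNS checks above run nslookup inside each busybox replica; a minimal sketch against one pod, noting that the busybox-7b57f96db7-* names are specific to this run:

kubectl --context ha-626163 get pods -o jsonpath='{.items[*].metadata.name}'
kubectl --context ha-626163 exec busybox-7b57f96db7-c4p6p -- nslookup kubernetes.default.svc.cluster.local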

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-c4p6p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-c4p6p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-gphsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-gphsb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-m4qx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 kubectl -- exec busybox-7b57f96db7-m4qx7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (49.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node add --alsologtostderr -v 5
E1019 16:38:34.096954  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.103384  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.114850  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.136265  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.177757  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.259174  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.420747  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:34.742503  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:35.384253  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:36.666422  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:38:39.228372  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 node add --alsologtostderr -v 5: (48.802535951s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.66s)
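
Adding the worker node and re-checking cluster state can be reproduced directly; a minimal sketch against the same profile:

out/minikube-linux-amd64 -p ha-626163 node add
out/minikube-linux-amd64 -p ha-626163 status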

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-626163 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1019 16:38:44.350534  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp testdata/cp-test.txt ha-626163:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile405420740/001/cp-test_ha-626163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163:/home/docker/cp-test.txt ha-626163-m02:/home/docker/cp-test_ha-626163_ha-626163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test_ha-626163_ha-626163-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163:/home/docker/cp-test.txt ha-626163-m03:/home/docker/cp-test_ha-626163_ha-626163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test_ha-626163_ha-626163-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163:/home/docker/cp-test.txt ha-626163-m04:/home/docker/cp-test_ha-626163_ha-626163-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test_ha-626163_ha-626163-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp testdata/cp-test.txt ha-626163-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile405420740/001/cp-test_ha-626163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m02:/home/docker/cp-test.txt ha-626163:/home/docker/cp-test_ha-626163-m02_ha-626163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test_ha-626163-m02_ha-626163.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m02:/home/docker/cp-test.txt ha-626163-m03:/home/docker/cp-test_ha-626163-m02_ha-626163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test_ha-626163-m02_ha-626163-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m02:/home/docker/cp-test.txt ha-626163-m04:/home/docker/cp-test_ha-626163-m02_ha-626163-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test_ha-626163-m02_ha-626163-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp testdata/cp-test.txt ha-626163-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile405420740/001/cp-test_ha-626163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m03:/home/docker/cp-test.txt ha-626163:/home/docker/cp-test_ha-626163-m03_ha-626163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test_ha-626163-m03_ha-626163.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m03:/home/docker/cp-test.txt ha-626163-m02:/home/docker/cp-test_ha-626163-m03_ha-626163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test_ha-626163-m03_ha-626163-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m03:/home/docker/cp-test.txt ha-626163-m04:/home/docker/cp-test_ha-626163-m03_ha-626163-m04.txt
E1019 16:38:54.591872  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test_ha-626163-m03_ha-626163-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp testdata/cp-test.txt ha-626163-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile405420740/001/cp-test_ha-626163-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m04:/home/docker/cp-test.txt ha-626163:/home/docker/cp-test_ha-626163-m04_ha-626163.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163 "sudo cat /home/docker/cp-test_ha-626163-m04_ha-626163.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m04:/home/docker/cp-test.txt ha-626163-m02:/home/docker/cp-test_ha-626163-m04_ha-626163-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m02 "sudo cat /home/docker/cp-test_ha-626163-m04_ha-626163-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 cp ha-626163-m04:/home/docker/cp-test.txt ha-626163-m03:/home/docker/cp-test_ha-626163-m04_ha-626163-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 ssh -n ha-626163-m03 "sudo cat /home/docker/cp-test_ha-626163-m04_ha-626163-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.93s)
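
A note on the block above: the long run of cp/ssh pairs is the helper exercising every source-to-destination combination, copying testdata/cp-test.txt into each node, back out to the host, and across to every other node, then verifying each hop with sudo cat over SSH. Below is a minimal Go sketch of that fan-out loop, assuming only the binary path, profile name, and file layout visible in the log; it is illustrative, not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to the minikube binary used in this report and aborts on failure.
func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	profile := "ha-626163"
	nodes := []string{"ha-626163", "ha-626163-m02", "ha-626163-m03", "ha-626163-m04"}
	for _, src := range nodes {
		// seed the source node, then fan the file out to every other node
		run("-p", profile, "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", profile, "cp", src+":/home/docker/cp-test.txt", dst+":"+remote)
			// verify the copy landed by cat-ing it over SSH, as the helpers do
			run("-p", profile, "ssh", "-n", dst, "sudo cat "+remote)
		}
	}
}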

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (90.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node stop m02 --alsologtostderr -v 5
E1019 16:39:15.073949  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:56.035429  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:39:58.890160  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 node stop m02 --alsologtostderr -v 5: (1m29.831250549s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5: exit status 7 (627.712602ms)

                                                
                                                
-- stdout --
	ha-626163
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-626163-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626163-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-626163-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:40:27.929256  292404 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:40:27.929456  292404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:40:27.929476  292404 out.go:374] Setting ErrFile to fd 2...
	I1019 16:40:27.929481  292404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:40:27.929661  292404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:40:27.929861  292404 out.go:368] Setting JSON to false
	I1019 16:40:27.929889  292404 mustload.go:66] Loading cluster: ha-626163
	I1019 16:40:27.930073  292404 notify.go:221] Checking for updates...
	I1019 16:40:27.930374  292404 config.go:182] Loaded profile config "ha-626163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:40:27.930394  292404 status.go:174] checking status of ha-626163 ...
	I1019 16:40:27.930906  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:27.930954  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:27.950692  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:42053
	I1019 16:40:27.951150  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:27.951732  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:27.951760  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:27.952191  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:27.952411  292404 main.go:143] libmachine: (ha-626163) Calling .GetState
	I1019 16:40:27.954614  292404 status.go:371] ha-626163 host status = "Running" (err=<nil>)
	I1019 16:40:27.954633  292404 host.go:66] Checking if "ha-626163" exists ...
	I1019 16:40:27.955124  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:27.955186  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:27.969724  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33301
	I1019 16:40:27.970382  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:27.970862  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:27.970876  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:27.971325  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:27.971534  292404 main.go:143] libmachine: (ha-626163) Calling .GetIP
	I1019 16:40:27.974869  292404 main.go:143] libmachine: (ha-626163) DBG | domain ha-626163 has defined MAC address 52:54:00:e5:15:b4 in network mk-ha-626163
	I1019 16:40:27.975468  292404 main.go:143] libmachine: (ha-626163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:15:b4", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:34:34 +0000 UTC Type:0 Mac:52:54:00:e5:15:b4 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-626163 Clientid:01:52:54:00:e5:15:b4}
	I1019 16:40:27.975501  292404 main.go:143] libmachine: (ha-626163) DBG | domain ha-626163 has defined IP address 192.168.39.148 and MAC address 52:54:00:e5:15:b4 in network mk-ha-626163
	I1019 16:40:27.975704  292404 host.go:66] Checking if "ha-626163" exists ...
	I1019 16:40:27.976081  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:27.976123  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:27.990794  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43073
	I1019 16:40:27.991305  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:27.991709  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:27.991727  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:27.992069  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:27.992235  292404 main.go:143] libmachine: (ha-626163) Calling .DriverName
	I1019 16:40:27.992421  292404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:40:27.992451  292404 main.go:143] libmachine: (ha-626163) Calling .GetSSHHostname
	I1019 16:40:27.995531  292404 main.go:143] libmachine: (ha-626163) DBG | domain ha-626163 has defined MAC address 52:54:00:e5:15:b4 in network mk-ha-626163
	I1019 16:40:27.996046  292404 main.go:143] libmachine: (ha-626163) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:15:b4", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:34:34 +0000 UTC Type:0 Mac:52:54:00:e5:15:b4 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-626163 Clientid:01:52:54:00:e5:15:b4}
	I1019 16:40:27.996084  292404 main.go:143] libmachine: (ha-626163) DBG | domain ha-626163 has defined IP address 192.168.39.148 and MAC address 52:54:00:e5:15:b4 in network mk-ha-626163
	I1019 16:40:27.996250  292404 main.go:143] libmachine: (ha-626163) Calling .GetSSHPort
	I1019 16:40:27.996420  292404 main.go:143] libmachine: (ha-626163) Calling .GetSSHKeyPath
	I1019 16:40:27.996586  292404 main.go:143] libmachine: (ha-626163) Calling .GetSSHUsername
	I1019 16:40:27.996725  292404 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/ha-626163/id_rsa Username:docker}
	I1019 16:40:28.078603  292404 ssh_runner.go:195] Run: systemctl --version
	I1019 16:40:28.084915  292404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:40:28.103224  292404 kubeconfig.go:125] found "ha-626163" server: "https://192.168.39.254:8443"
	I1019 16:40:28.103267  292404 api_server.go:166] Checking apiserver status ...
	I1019 16:40:28.103303  292404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:40:28.123760  292404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W1019 16:40:28.134901  292404 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:40:28.134962  292404 ssh_runner.go:195] Run: ls
	I1019 16:40:28.139691  292404 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1019 16:40:28.144433  292404 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1019 16:40:28.144467  292404 status.go:463] ha-626163 apiserver status = Running (err=<nil>)
	I1019 16:40:28.144480  292404 status.go:176] ha-626163 status: &{Name:ha-626163 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:40:28.144499  292404 status.go:174] checking status of ha-626163-m02 ...
	I1019 16:40:28.144919  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.144977  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.159367  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41899
	I1019 16:40:28.159802  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.160266  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.160290  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.160656  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.160848  292404 main.go:143] libmachine: (ha-626163-m02) Calling .GetState
	I1019 16:40:28.162720  292404 status.go:371] ha-626163-m02 host status = "Stopped" (err=<nil>)
	I1019 16:40:28.162734  292404 status.go:384] host is not running, skipping remaining checks
	I1019 16:40:28.162739  292404 status.go:176] ha-626163-m02 status: &{Name:ha-626163-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:40:28.162756  292404 status.go:174] checking status of ha-626163-m03 ...
	I1019 16:40:28.163142  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.163194  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.176067  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43533
	I1019 16:40:28.176483  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.176958  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.176977  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.177258  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.177483  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetState
	I1019 16:40:28.179008  292404 status.go:371] ha-626163-m03 host status = "Running" (err=<nil>)
	I1019 16:40:28.179023  292404 host.go:66] Checking if "ha-626163-m03" exists ...
	I1019 16:40:28.179376  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.179413  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.192443  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43009
	I1019 16:40:28.192824  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.193325  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.193347  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.193679  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.193879  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetIP
	I1019 16:40:28.196847  292404 main.go:143] libmachine: (ha-626163-m03) DBG | domain ha-626163-m03 has defined MAC address 52:54:00:30:44:80 in network mk-ha-626163
	I1019 16:40:28.197311  292404 main.go:143] libmachine: (ha-626163-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:44:80", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:36:41 +0000 UTC Type:0 Mac:52:54:00:30:44:80 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-626163-m03 Clientid:01:52:54:00:30:44:80}
	I1019 16:40:28.197336  292404 main.go:143] libmachine: (ha-626163-m03) DBG | domain ha-626163-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:30:44:80 in network mk-ha-626163
	I1019 16:40:28.197516  292404 host.go:66] Checking if "ha-626163-m03" exists ...
	I1019 16:40:28.197783  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.197817  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.210752  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33643
	I1019 16:40:28.211197  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.211621  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.211643  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.211975  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.212167  292404 main.go:143] libmachine: (ha-626163-m03) Calling .DriverName
	I1019 16:40:28.212369  292404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:40:28.212390  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetSSHHostname
	I1019 16:40:28.215211  292404 main.go:143] libmachine: (ha-626163-m03) DBG | domain ha-626163-m03 has defined MAC address 52:54:00:30:44:80 in network mk-ha-626163
	I1019 16:40:28.215633  292404 main.go:143] libmachine: (ha-626163-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:44:80", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:36:41 +0000 UTC Type:0 Mac:52:54:00:30:44:80 Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-626163-m03 Clientid:01:52:54:00:30:44:80}
	I1019 16:40:28.215658  292404 main.go:143] libmachine: (ha-626163-m03) DBG | domain ha-626163-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:30:44:80 in network mk-ha-626163
	I1019 16:40:28.215817  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetSSHPort
	I1019 16:40:28.216001  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetSSHKeyPath
	I1019 16:40:28.216154  292404 main.go:143] libmachine: (ha-626163-m03) Calling .GetSSHUsername
	I1019 16:40:28.216298  292404 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/ha-626163-m03/id_rsa Username:docker}
	I1019 16:40:28.296531  292404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:40:28.312958  292404 kubeconfig.go:125] found "ha-626163" server: "https://192.168.39.254:8443"
	I1019 16:40:28.313001  292404 api_server.go:166] Checking apiserver status ...
	I1019 16:40:28.313037  292404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 16:40:28.330998  292404 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1736/cgroup
	W1019 16:40:28.342539  292404 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1736/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 16:40:28.342602  292404 ssh_runner.go:195] Run: ls
	I1019 16:40:28.347860  292404 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1019 16:40:28.352437  292404 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1019 16:40:28.352460  292404 status.go:463] ha-626163-m03 apiserver status = Running (err=<nil>)
	I1019 16:40:28.352472  292404 status.go:176] ha-626163-m03 status: &{Name:ha-626163-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:40:28.352492  292404 status.go:174] checking status of ha-626163-m04 ...
	I1019 16:40:28.352807  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.352859  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.366360  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:41193
	I1019 16:40:28.366871  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.367362  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.367385  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.367670  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.367867  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetState
	I1019 16:40:28.369746  292404 status.go:371] ha-626163-m04 host status = "Running" (err=<nil>)
	I1019 16:40:28.369763  292404 host.go:66] Checking if "ha-626163-m04" exists ...
	I1019 16:40:28.370219  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.370254  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.384042  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:40737
	I1019 16:40:28.384477  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.384897  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.384918  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.385232  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.385421  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetIP
	I1019 16:40:28.388725  292404 main.go:143] libmachine: (ha-626163-m04) DBG | domain ha-626163-m04 has defined MAC address 52:54:00:5a:cd:80 in network mk-ha-626163
	I1019 16:40:28.389236  292404 main.go:143] libmachine: (ha-626163-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:cd:80", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:38:12 +0000 UTC Type:0 Mac:52:54:00:5a:cd:80 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-626163-m04 Clientid:01:52:54:00:5a:cd:80}
	I1019 16:40:28.389275  292404 main.go:143] libmachine: (ha-626163-m04) DBG | domain ha-626163-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:5a:cd:80 in network mk-ha-626163
	I1019 16:40:28.389467  292404 host.go:66] Checking if "ha-626163-m04" exists ...
	I1019 16:40:28.389853  292404 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:40:28.389910  292404 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:40:28.403619  292404 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34613
	I1019 16:40:28.404056  292404 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:40:28.404513  292404 main.go:143] libmachine: Using API Version  1
	I1019 16:40:28.404537  292404 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:40:28.404862  292404 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:40:28.405048  292404 main.go:143] libmachine: (ha-626163-m04) Calling .DriverName
	I1019 16:40:28.405282  292404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 16:40:28.405308  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetSSHHostname
	I1019 16:40:28.408549  292404 main.go:143] libmachine: (ha-626163-m04) DBG | domain ha-626163-m04 has defined MAC address 52:54:00:5a:cd:80 in network mk-ha-626163
	I1019 16:40:28.409201  292404 main.go:143] libmachine: (ha-626163-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:cd:80", ip: ""} in network mk-ha-626163: {Iface:virbr1 ExpiryTime:2025-10-19 17:38:12 +0000 UTC Type:0 Mac:52:54:00:5a:cd:80 Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-626163-m04 Clientid:01:52:54:00:5a:cd:80}
	I1019 16:40:28.409222  292404 main.go:143] libmachine: (ha-626163-m04) DBG | domain ha-626163-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:5a:cd:80 in network mk-ha-626163
	I1019 16:40:28.409445  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetSSHPort
	I1019 16:40:28.409616  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetSSHKeyPath
	I1019 16:40:28.409757  292404 main.go:143] libmachine: (ha-626163-m04) Calling .GetSSHUsername
	I1019 16:40:28.409888  292404 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/ha-626163-m04/id_rsa Username:docker}
	I1019 16:40:28.490530  292404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 16:40:28.506645  292404 status.go:176] ha-626163-m04 status: &{Name:ha-626163-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (90.46s)
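
The non-zero exit above (status 7) is exactly what the test is after: with ha-626163-m02 stopped, minikube status reports a degraded cluster and exits non-zero while the remaining nodes stay Running. The per-node fields in the stdout mirror the Status struct visible in the stderr trace (Name, Host, Kubelet, APIServer, Kubeconfig, Worker). A hedged Go sketch of reading the machine-readable variant, status --output json as the CopyFile step ran it; the JSON key names and the one-object-per-node array shape are assumptions based on that struct, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// NodeStatus mirrors the fields shown in the status.go trace above; the exact
// JSON key names are an assumption, not copied from minikube's source.
type NodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// status exits non-zero (7 here) when any node is stopped, so the error is
	// ignored deliberately and only the captured stdout is inspected.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "ha-626163",
		"status", "--output", "json").Output()

	var nodes []NodeStatus // assumed shape for multi-node profiles: one object per node
	if err := json.Unmarshal(out, &nodes); err != nil {
		var single NodeStatus // single-node profiles presumably emit a single object
		if err2 := json.Unmarshal(out, &single); err2 != nil {
			panic(err)
		}
		nodes = []NodeStatus{single}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			n.Name, n.Host, n.Kubelet, n.APIServer, n.Kubeconfig)
	}
}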

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (33.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 node start m02 --alsologtostderr -v 5: (31.944050116s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5: (1.079846752s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.114839089s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 stop --alsologtostderr -v 5
E1019 16:41:17.958196  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:43:34.097046  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:44:01.800730  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:44:58.892054  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 stop --alsologtostderr -v 5: (4m18.508182106s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 start --wait true --alsologtostderr -v 5
E1019 16:46:21.964430  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 start --wait true --alsologtostderr -v 5: (1m58.140286386s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.78s)
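
The two node list calls bracket the four-plus-minute stop and the two-minute start --wait true; the point of the test is that the node set survives the full restart unchanged. A tiny sketch of that before/after comparison (a hypothetical wrapper, not the actual ha_test.go assertion):

package main

import (
	"fmt"
	"os/exec"
)

// nodeList captures the plain-text output of `minikube node list` for the profile.
func nodeList() string {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-626163",
		"node", "list", "--alsologtostderr", "-v", "5").Output()
	if err != nil {
		panic(err)
	}
	return string(out)
}

func main() {
	before := nodeList()
	// ... stop the cluster and start it again with --wait true, as the test does ...
	after := nodeList()
	if before != after {
		panic(fmt.Sprintf("node list changed across restart:\n%s\nvs\n%s", before, after))
	}
	fmt.Println("node list preserved across restart")
}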

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 node delete m03 --alsologtostderr -v 5: (17.814562493s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.56s)
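
The closing kubectl call walks every node's .status.conditions with a go-template and prints only the Ready condition's status, so after the m03 delete the expectation is one "True" per remaining node. The same check expressed in Go against kubectl get nodes -o json; the field paths follow the template above, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the go-template above touches are modelled here.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		panic(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				// the test expects "True" here for every remaining node
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}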

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (251.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 stop --alsologtostderr -v 5
E1019 16:48:34.100775  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:49:58.892209  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 stop --alsologtostderr -v 5: (4m11.724411025s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5: exit status 7 (111.01925ms)

                                                
                                                
-- stdout --
	ha-626163
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626163-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-626163-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 16:51:51.113954  296230 out.go:360] Setting OutFile to fd 1 ...
	I1019 16:51:51.114242  296230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:51:51.114250  296230 out.go:374] Setting ErrFile to fd 2...
	I1019 16:51:51.114254  296230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 16:51:51.114451  296230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 16:51:51.114605  296230 out.go:368] Setting JSON to false
	I1019 16:51:51.114628  296230 mustload.go:66] Loading cluster: ha-626163
	I1019 16:51:51.114707  296230 notify.go:221] Checking for updates...
	I1019 16:51:51.115006  296230 config.go:182] Loaded profile config "ha-626163": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 16:51:51.115023  296230 status.go:174] checking status of ha-626163 ...
	I1019 16:51:51.115473  296230 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:51:51.115510  296230 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:51:51.138651  296230 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:44831
	I1019 16:51:51.139086  296230 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:51:51.139592  296230 main.go:143] libmachine: Using API Version  1
	I1019 16:51:51.139616  296230 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:51:51.140075  296230 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:51:51.140276  296230 main.go:143] libmachine: (ha-626163) Calling .GetState
	I1019 16:51:51.142113  296230 status.go:371] ha-626163 host status = "Stopped" (err=<nil>)
	I1019 16:51:51.142129  296230 status.go:384] host is not running, skipping remaining checks
	I1019 16:51:51.142134  296230 status.go:176] ha-626163 status: &{Name:ha-626163 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:51:51.142153  296230 status.go:174] checking status of ha-626163-m02 ...
	I1019 16:51:51.142507  296230 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:51:51.142559  296230 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:51:51.155506  296230 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45199
	I1019 16:51:51.155954  296230 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:51:51.156422  296230 main.go:143] libmachine: Using API Version  1
	I1019 16:51:51.156446  296230 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:51:51.156781  296230 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:51:51.156988  296230 main.go:143] libmachine: (ha-626163-m02) Calling .GetState
	I1019 16:51:51.158737  296230 status.go:371] ha-626163-m02 host status = "Stopped" (err=<nil>)
	I1019 16:51:51.158754  296230 status.go:384] host is not running, skipping remaining checks
	I1019 16:51:51.158760  296230 status.go:176] ha-626163-m02 status: &{Name:ha-626163-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 16:51:51.158780  296230 status.go:174] checking status of ha-626163-m04 ...
	I1019 16:51:51.159074  296230 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 16:51:51.159116  296230 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 16:51:51.171835  296230 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:33251
	I1019 16:51:51.172336  296230 main.go:143] libmachine: () Calling .GetVersion
	I1019 16:51:51.172760  296230 main.go:143] libmachine: Using API Version  1
	I1019 16:51:51.172782  296230 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 16:51:51.173125  296230 main.go:143] libmachine: () Calling .GetMachineName
	I1019 16:51:51.173310  296230 main.go:143] libmachine: (ha-626163-m04) Calling .GetState
	I1019 16:51:51.174898  296230 status.go:371] ha-626163-m04 host status = "Stopped" (err=<nil>)
	I1019 16:51:51.174914  296230 status.go:384] host is not running, skipping remaining checks
	I1019 16:51:51.174921  296230 status.go:176] ha-626163-m04 status: &{Name:ha-626163-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (251.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (98.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m37.869994052s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (98.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (80.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 node add --control-plane --alsologtostderr -v 5
E1019 16:53:34.097191  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-626163 node add --control-plane --alsologtostderr -v 5: (1m19.750200272s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-626163 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
x
+
TestJSONOutput/start/Command (79.21s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-468666 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 16:54:57.163949  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 16:54:58.888133  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-468666 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.212232195s)
--- PASS: TestJSONOutput/start/Command (79.21s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-468666 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-468666 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-468666 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-468666 --output=json --user=testUser: (6.876850325s)
--- PASS: TestJSONOutput/stop/Command (6.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-820786 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-820786 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.170138ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"34671485-a2b5-4807-8261-be1d67946087","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-820786] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8558c15-795e-497c-9c51-f4dde07cb849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21683"}}
	{"specversion":"1.0","id":"d5cb58a9-29fa-4bd9-8225-e10ff02008c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b03aa81a-1f99-4ed8-8ddd-a94a03fffed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig"}}
	{"specversion":"1.0","id":"00862e38-19bd-4927-a56e-9d091d706169","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube"}}
	{"specversion":"1.0","id":"a56bb67d-03bb-47a8-a961-f9c19b08dcc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"73702083-3987-436d-8438-8c74f42502b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7a9eb76-3363-4e75-8ab5-812100fd9d62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-820786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-820786
--- PASS: TestErrorJSONOutput (0.20s)
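
The stdout block above is what --output=json produces: one CloudEvents-style JSON object per line, ending here with an io.k8s.sigs.minikube.error event that carries the DRV_UNSUPPORTED_OS name and exit code 56. A small decoder for those lines, with the struct fields copied from the events shown (treat it as a sketch, not minikube's own schema definition):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the JSON lines above.
type event struct {
	Specversion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	Datacontenttype string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// pipe `minikube start --output=json ...` into this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON noise
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["name"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		default:
			fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
		}
	}
}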

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (78.56s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-630449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-630449 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.547042246s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-633399 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-633399 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (38.213812918s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-630449
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-633399
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-633399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-633399
helpers_test.go:175: Cleaning up "first-630449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-630449
--- PASS: TestMinikubeProfile (78.56s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (20.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-073759 --memory=3072 --mount-string /tmp/TestMountStartserial1989417955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-073759 --memory=3072 --mount-string /tmp/TestMountStartserial1989417955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.130532366s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.13s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-073759 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-073759 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-091118 --memory=3072 --mount-string /tmp/TestMountStartserial1989417955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-091118 --memory=3072 --mount-string /tmp/TestMountStartserial1989417955/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (19.775958901s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.78s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-073759 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-091118
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-091118: (1.271716577s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.63s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-091118
E1019 16:58:34.101554  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-091118: (18.633385536s)
--- PASS: TestMountStart/serial/RestartStopped (19.63s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-091118 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (126.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-470285 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 16:59:58.888129  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-470285 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (2m6.33029697s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-470285 -- rollout status deployment/busybox: (4.000098584s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-9vsl5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-ckspx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-9vsl5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-ckspx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-9vsl5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-ckspx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.45s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-9vsl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-9vsl5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-ckspx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-470285 -- exec busybox-7b57f96db7-ckspx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (41.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-470285 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-470285 -v=5 --alsologtostderr: (40.875412215s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.44s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-470285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp testdata/cp-test.txt multinode-470285:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile204354579/001/cp-test_multinode-470285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285:/home/docker/cp-test.txt multinode-470285-m02:/home/docker/cp-test_multinode-470285_multinode-470285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test_multinode-470285_multinode-470285-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285:/home/docker/cp-test.txt multinode-470285-m03:/home/docker/cp-test_multinode-470285_multinode-470285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test_multinode-470285_multinode-470285-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp testdata/cp-test.txt multinode-470285-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile204354579/001/cp-test_multinode-470285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m02:/home/docker/cp-test.txt multinode-470285:/home/docker/cp-test_multinode-470285-m02_multinode-470285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test_multinode-470285-m02_multinode-470285.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m02:/home/docker/cp-test.txt multinode-470285-m03:/home/docker/cp-test_multinode-470285-m02_multinode-470285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test_multinode-470285-m02_multinode-470285-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp testdata/cp-test.txt multinode-470285-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile204354579/001/cp-test_multinode-470285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m03:/home/docker/cp-test.txt multinode-470285:/home/docker/cp-test_multinode-470285-m03_multinode-470285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285 "sudo cat /home/docker/cp-test_multinode-470285-m03_multinode-470285.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 cp multinode-470285-m03:/home/docker/cp-test.txt multinode-470285-m02:/home/docker/cp-test_multinode-470285-m03_multinode-470285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 ssh -n multinode-470285-m02 "sudo cat /home/docker/cp-test_multinode-470285-m03_multinode-470285-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.19s)

                                                
                                    
TestMultiNode/serial/StopNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-470285 node stop m03: (1.625413791s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-470285 status: exit status 7 (421.929072ms)

                                                
                                                
-- stdout --
	multinode-470285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-470285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-470285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr: exit status 7 (414.445712ms)

                                                
                                                
-- stdout --
	multinode-470285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-470285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-470285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:01:51.108208  303910 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:01:51.108468  303910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:01:51.108477  303910 out.go:374] Setting ErrFile to fd 2...
	I1019 17:01:51.108481  303910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:01:51.108719  303910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:01:51.108940  303910 out.go:368] Setting JSON to false
	I1019 17:01:51.108971  303910 mustload.go:66] Loading cluster: multinode-470285
	I1019 17:01:51.109044  303910 notify.go:221] Checking for updates...
	I1019 17:01:51.109458  303910 config.go:182] Loaded profile config "multinode-470285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:01:51.109479  303910 status.go:174] checking status of multinode-470285 ...
	I1019 17:01:51.110022  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.110066  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.125079  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34065
	I1019 17:01:51.125564  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.126117  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.126140  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.126516  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.126715  303910 main.go:143] libmachine: (multinode-470285) Calling .GetState
	I1019 17:01:51.128665  303910 status.go:371] multinode-470285 host status = "Running" (err=<nil>)
	I1019 17:01:51.128683  303910 host.go:66] Checking if "multinode-470285" exists ...
	I1019 17:01:51.129005  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.129053  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.142484  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38557
	I1019 17:01:51.142883  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.143274  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.143297  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.143661  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.143825  303910 main.go:143] libmachine: (multinode-470285) Calling .GetIP
	I1019 17:01:51.146917  303910 main.go:143] libmachine: (multinode-470285) DBG | domain multinode-470285 has defined MAC address 52:54:00:47:a0:c9 in network mk-multinode-470285
	I1019 17:01:51.147362  303910 main.go:143] libmachine: (multinode-470285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a0:c9", ip: ""} in network mk-multinode-470285: {Iface:virbr1 ExpiryTime:2025-10-19 17:59:01 +0000 UTC Type:0 Mac:52:54:00:47:a0:c9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-470285 Clientid:01:52:54:00:47:a0:c9}
	I1019 17:01:51.147393  303910 main.go:143] libmachine: (multinode-470285) DBG | domain multinode-470285 has defined IP address 192.168.39.188 and MAC address 52:54:00:47:a0:c9 in network mk-multinode-470285
	I1019 17:01:51.147557  303910 host.go:66] Checking if "multinode-470285" exists ...
	I1019 17:01:51.147878  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.147930  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.161535  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:40039
	I1019 17:01:51.161929  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.162395  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.162421  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.162791  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.162998  303910 main.go:143] libmachine: (multinode-470285) Calling .DriverName
	I1019 17:01:51.163213  303910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:01:51.163236  303910 main.go:143] libmachine: (multinode-470285) Calling .GetSSHHostname
	I1019 17:01:51.166120  303910 main.go:143] libmachine: (multinode-470285) DBG | domain multinode-470285 has defined MAC address 52:54:00:47:a0:c9 in network mk-multinode-470285
	I1019 17:01:51.166521  303910 main.go:143] libmachine: (multinode-470285) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:a0:c9", ip: ""} in network mk-multinode-470285: {Iface:virbr1 ExpiryTime:2025-10-19 17:59:01 +0000 UTC Type:0 Mac:52:54:00:47:a0:c9 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-470285 Clientid:01:52:54:00:47:a0:c9}
	I1019 17:01:51.166554  303910 main.go:143] libmachine: (multinode-470285) DBG | domain multinode-470285 has defined IP address 192.168.39.188 and MAC address 52:54:00:47:a0:c9 in network mk-multinode-470285
	I1019 17:01:51.166672  303910 main.go:143] libmachine: (multinode-470285) Calling .GetSSHPort
	I1019 17:01:51.166837  303910 main.go:143] libmachine: (multinode-470285) Calling .GetSSHKeyPath
	I1019 17:01:51.167015  303910 main.go:143] libmachine: (multinode-470285) Calling .GetSSHUsername
	I1019 17:01:51.167146  303910 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/multinode-470285/id_rsa Username:docker}
	I1019 17:01:51.244521  303910 ssh_runner.go:195] Run: systemctl --version
	I1019 17:01:51.250139  303910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:01:51.266511  303910 kubeconfig.go:125] found "multinode-470285" server: "https://192.168.39.188:8443"
	I1019 17:01:51.266544  303910 api_server.go:166] Checking apiserver status ...
	I1019 17:01:51.266574  303910 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1019 17:01:51.284089  303910 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	W1019 17:01:51.294692  303910 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1019 17:01:51.294735  303910 ssh_runner.go:195] Run: ls
	I1019 17:01:51.299295  303910 api_server.go:253] Checking apiserver healthz at https://192.168.39.188:8443/healthz ...
	I1019 17:01:51.304962  303910 api_server.go:279] https://192.168.39.188:8443/healthz returned 200:
	ok
	I1019 17:01:51.304995  303910 status.go:463] multinode-470285 apiserver status = Running (err=<nil>)
	I1019 17:01:51.305005  303910 status.go:176] multinode-470285 status: &{Name:multinode-470285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:01:51.305020  303910 status.go:174] checking status of multinode-470285-m02 ...
	I1019 17:01:51.305299  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.305339  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.319514  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:34389
	I1019 17:01:51.320040  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.320531  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.320550  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.320878  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.321055  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetState
	I1019 17:01:51.322608  303910 status.go:371] multinode-470285-m02 host status = "Running" (err=<nil>)
	I1019 17:01:51.322626  303910 host.go:66] Checking if "multinode-470285-m02" exists ...
	I1019 17:01:51.322952  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.323010  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.336218  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38333
	I1019 17:01:51.336582  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.337030  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.337051  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.337366  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.337574  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetIP
	I1019 17:01:51.340747  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | domain multinode-470285-m02 has defined MAC address 52:54:00:66:13:6c in network mk-multinode-470285
	I1019 17:01:51.341321  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:13:6c", ip: ""} in network mk-multinode-470285: {Iface:virbr1 ExpiryTime:2025-10-19 18:00:25 +0000 UTC Type:0 Mac:52:54:00:66:13:6c Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-470285-m02 Clientid:01:52:54:00:66:13:6c}
	I1019 17:01:51.341350  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | domain multinode-470285-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:66:13:6c in network mk-multinode-470285
	I1019 17:01:51.341559  303910 host.go:66] Checking if "multinode-470285-m02" exists ...
	I1019 17:01:51.342087  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.342146  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.355385  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:45247
	I1019 17:01:51.355747  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.356181  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.356212  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.356556  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.356751  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .DriverName
	I1019 17:01:51.356941  303910 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1019 17:01:51.356963  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetSSHHostname
	I1019 17:01:51.360031  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | domain multinode-470285-m02 has defined MAC address 52:54:00:66:13:6c in network mk-multinode-470285
	I1019 17:01:51.360502  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:13:6c", ip: ""} in network mk-multinode-470285: {Iface:virbr1 ExpiryTime:2025-10-19 18:00:25 +0000 UTC Type:0 Mac:52:54:00:66:13:6c Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:multinode-470285-m02 Clientid:01:52:54:00:66:13:6c}
	I1019 17:01:51.360526  303910 main.go:143] libmachine: (multinode-470285-m02) DBG | domain multinode-470285-m02 has defined IP address 192.168.39.107 and MAC address 52:54:00:66:13:6c in network mk-multinode-470285
	I1019 17:01:51.360711  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetSSHPort
	I1019 17:01:51.360892  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetSSHKeyPath
	I1019 17:01:51.361063  303910 main.go:143] libmachine: (multinode-470285-m02) Calling .GetSSHUsername
	I1019 17:01:51.361247  303910 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21683-274250/.minikube/machines/multinode-470285-m02/id_rsa Username:docker}
	I1019 17:01:51.440249  303910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1019 17:01:51.455016  303910 status.go:176] multinode-470285-m02 status: &{Name:multinode-470285-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:01:51.455051  303910 status.go:174] checking status of multinode-470285-m03 ...
	I1019 17:01:51.455399  303910 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:01:51.455444  303910 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:01:51.471119  303910 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:37631
	I1019 17:01:51.471551  303910 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:01:51.472028  303910 main.go:143] libmachine: Using API Version  1
	I1019 17:01:51.472055  303910 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:01:51.472398  303910 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:01:51.472680  303910 main.go:143] libmachine: (multinode-470285-m03) Calling .GetState
	I1019 17:01:51.474295  303910 status.go:371] multinode-470285-m03 host status = "Stopped" (err=<nil>)
	I1019 17:01:51.474309  303910 status.go:384] host is not running, skipping remaining checks
	I1019 17:01:51.474316  303910 status.go:176] multinode-470285-m03 status: &{Name:multinode-470285-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.46s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-470285 node start m03 -v=5 --alsologtostderr: (36.592221658s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.21s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (303.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-470285
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-470285
E1019 17:03:01.968625  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:03:34.101003  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:04:58.891583  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-470285: (2m58.043726504s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-470285 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-470285 --wait=true -v=5 --alsologtostderr: (2m5.239980075s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-470285
--- PASS: TestMultiNode/serial/RestartKeepsNodes (303.38s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-470285 node delete m03: (2.148582772s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.74s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (162.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 stop
E1019 17:08:34.096637  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:09:58.892302  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-470285 stop: (2m42.794897759s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-470285 status: exit status 7 (95.136034ms)

                                                
                                                
-- stdout --
	multinode-470285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-470285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr: exit status 7 (81.846044ms)

                                                
                                                
-- stdout --
	multinode-470285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-470285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:10:17.741785  306706 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:10:17.741907  306706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:10:17.741917  306706 out.go:374] Setting ErrFile to fd 2...
	I1019 17:10:17.741921  306706 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:10:17.742175  306706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:10:17.742365  306706 out.go:368] Setting JSON to false
	I1019 17:10:17.742395  306706 mustload.go:66] Loading cluster: multinode-470285
	I1019 17:10:17.742514  306706 notify.go:221] Checking for updates...
	I1019 17:10:17.742933  306706 config.go:182] Loaded profile config "multinode-470285": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:10:17.742952  306706 status.go:174] checking status of multinode-470285 ...
	I1019 17:10:17.743526  306706 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:10:17.743594  306706 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:10:17.757043  306706 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:38443
	I1019 17:10:17.757449  306706 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:10:17.758015  306706 main.go:143] libmachine: Using API Version  1
	I1019 17:10:17.758044  306706 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:10:17.758430  306706 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:10:17.758641  306706 main.go:143] libmachine: (multinode-470285) Calling .GetState
	I1019 17:10:17.760483  306706 status.go:371] multinode-470285 host status = "Stopped" (err=<nil>)
	I1019 17:10:17.760500  306706 status.go:384] host is not running, skipping remaining checks
	I1019 17:10:17.760507  306706 status.go:176] multinode-470285 status: &{Name:multinode-470285 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1019 17:10:17.760555  306706 status.go:174] checking status of multinode-470285-m02 ...
	I1019 17:10:17.760874  306706 main.go:143] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1019 17:10:17.760924  306706 main.go:143] libmachine: Launching plugin server for driver kvm2
	I1019 17:10:17.773960  306706 main.go:143] libmachine: Plugin server listening at address 127.0.0.1:43945
	I1019 17:10:17.774372  306706 main.go:143] libmachine: () Calling .GetVersion
	I1019 17:10:17.774715  306706 main.go:143] libmachine: Using API Version  1
	I1019 17:10:17.774735  306706 main.go:143] libmachine: () Calling .SetConfigRaw
	I1019 17:10:17.775070  306706 main.go:143] libmachine: () Calling .GetMachineName
	I1019 17:10:17.775271  306706 main.go:143] libmachine: (multinode-470285-m02) Calling .GetState
	I1019 17:10:17.776748  306706 status.go:371] multinode-470285-m02 host status = "Stopped" (err=<nil>)
	I1019 17:10:17.776763  306706 status.go:384] host is not running, skipping remaining checks
	I1019 17:10:17.776770  306706 status.go:176] multinode-470285-m02 status: &{Name:multinode-470285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (162.97s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (86.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-470285 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 17:11:37.165916  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-470285 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m26.214977534s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-470285 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.76s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-470285
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-470285-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-470285-m02 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (65.223759ms)

                                                
                                                
-- stdout --
	* [multinode-470285-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-470285-m02' is duplicated with machine name 'multinode-470285-m02' in profile 'multinode-470285'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-470285-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-470285-m03 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (37.896603022s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-470285
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-470285: exit status 80 (217.070568ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-470285 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-470285-m03 already exists in multinode-470285-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-470285-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.10s)

                                                
                                    
TestScheduledStopUnix (107.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-593188 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 17:14:58.892167  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-593188 --memory=3072 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (35.371634048s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-593188 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-593188 -n scheduled-stop-593188
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-593188 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1019 17:15:18.253552  278280 retry.go:31] will retry after 129.558µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.254707  278280 retry.go:31] will retry after 108.343µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.255853  278280 retry.go:31] will retry after 209.514µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.256973  278280 retry.go:31] will retry after 231.914µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.258117  278280 retry.go:31] will retry after 297.553µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.259255  278280 retry.go:31] will retry after 483.673µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.260383  278280 retry.go:31] will retry after 1.072997ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.261546  278280 retry.go:31] will retry after 858.144µs: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.262683  278280 retry.go:31] will retry after 1.666932ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.264889  278280 retry.go:31] will retry after 4.079299ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.269034  278280 retry.go:31] will retry after 7.761431ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.277291  278280 retry.go:31] will retry after 6.928146ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.284617  278280 retry.go:31] will retry after 13.890831ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.298870  278280 retry.go:31] will retry after 13.016923ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.312181  278280 retry.go:31] will retry after 20.366734ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
I1019 17:15:18.333471  278280 retry.go:31] will retry after 27.707777ms: open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/scheduled-stop-593188/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-593188 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-593188 -n scheduled-stop-593188
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-593188
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-593188 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-593188
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-593188: exit status 7 (67.285601ms)

                                                
                                                
-- stdout --
	scheduled-stop-593188
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-593188 -n scheduled-stop-593188
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-593188 -n scheduled-stop-593188: exit status 7 (65.784069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-593188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-593188
--- PASS: TestScheduledStopUnix (107.09s)

                                                
                                    
TestRunningBinaryUpgrade (77.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2626442674 start -p running-upgrade-936006 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2626442674 start -p running-upgrade-936006 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.608282211s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-936006 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 17:19:58.887710  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-936006 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (21.631746489s)
helpers_test.go:175: Cleaning up "running-upgrade-936006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-936006
--- PASS: TestRunningBinaryUpgrade (77.01s)

                                                
                                    
TestKubernetesUpgrade (154.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m3.694093117s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-755918
E1019 17:18:34.097268  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-755918: (2.16102619s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-755918 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-755918 status --format={{.Host}}: exit status 7 (89.69489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (36.238150113s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-755918 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 106 (90.744235ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-755918] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-755918
	    minikube start -p kubernetes-upgrade-755918 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7559182 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-755918 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-755918 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.373236181s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-755918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-755918
--- PASS: TestKubernetesUpgrade (154.55s)

                                                
                                    
TestPause/serial/Start (87.43s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-046984 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-046984 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m27.4253347s)
--- PASS: TestPause/serial/Start (87.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.06s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.231295762 start -p stopped-upgrade-254072 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.231295762 start -p stopped-upgrade-254072 --memory=3072 --vm-driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m2.795676614s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.231295762 -p stopped-upgrade-254072 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.231295762 -p stopped-upgrade-254072 stop: (1.572883665s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-254072 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 17:19:41.970480  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-254072 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.584324784s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.95s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-254072
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-254072: (1.024311009s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestNetworkPlugins/group/false (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-162308 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-162308 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (105.490296ms)

                                                
                                                
-- stdout --
	* [false-162308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1019 17:20:21.505200  314999 out.go:360] Setting OutFile to fd 1 ...
	I1019 17:20:21.505308  314999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:20:21.505316  314999 out.go:374] Setting ErrFile to fd 2...
	I1019 17:20:21.505320  314999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1019 17:20:21.505528  314999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21683-274250/.minikube/bin
	I1019 17:20:21.506002  314999 out.go:368] Setting JSON to false
	I1019 17:20:21.506883  314999 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":10963,"bootTime":1760883458,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1019 17:20:21.506977  314999 start.go:143] virtualization: kvm guest
	I1019 17:20:21.509068  314999 out.go:179] * [false-162308] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1019 17:20:21.510239  314999 out.go:179]   - MINIKUBE_LOCATION=21683
	I1019 17:20:21.510276  314999 notify.go:221] Checking for updates...
	I1019 17:20:21.512465  314999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1019 17:20:21.513746  314999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	I1019 17:20:21.514824  314999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	I1019 17:20:21.515864  314999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1019 17:20:21.516907  314999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1019 17:20:21.518412  314999 config.go:182] Loaded profile config "cert-expiration-067580": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:20:21.518578  314999 config.go:182] Loaded profile config "cert-options-312332": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:20:21.518684  314999 config.go:182] Loaded profile config "force-systemd-flag-960537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1019 17:20:21.518784  314999 driver.go:422] Setting default libvirt URI to qemu:///system
	I1019 17:20:21.555083  314999 out.go:179] * Using the kvm2 driver based on user configuration
	I1019 17:20:21.556290  314999 start.go:309] selected driver: kvm2
	I1019 17:20:21.556304  314999 start.go:930] validating driver "kvm2" against <nil>
	I1019 17:20:21.556315  314999 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1019 17:20:21.558066  314999 out.go:203] 
	W1019 17:20:21.559056  314999 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1019 17:20:21.560060  314999 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-162308 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.127:8443
  name: cert-expiration-067580
contexts:
- context:
    cluster: cert-expiration-067580
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-067580
  name: cert-expiration-067580
current-context: ""
kind: Config
users:
- name: cert-expiration-067580
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.crt
    client-key: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-162308

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-162308"

                                                
                                                
----------------------- debugLogs end: false-162308 [took: 3.298689787s] --------------------------------
helpers_test.go:175: Cleaning up "false-162308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-162308
--- PASS: TestNetworkPlugins/group/false (3.61s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: exit status 14 (70.818502ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-710014] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21683
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21683-274250/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21683-274250/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (51.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710014 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710014 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (51.606805408s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-710014 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (51.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (109.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-519622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-519622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (1m49.211040608s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (113.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-553278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-553278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m53.583909632s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (113.58s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (35.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (34.329714441s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-710014 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-710014 status -o json: exit status 2 (276.658951ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-710014","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-710014
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-710014: (1.020517752s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (35.63s)

                                                
                                    
TestNoKubernetes/serial/Start (25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710014 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (25.003206078s)
--- PASS: TestNoKubernetes/serial/Start (25.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-710014 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-710014 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.258711ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (12.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (11.122831813s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (1.119621239s)
--- PASS: TestNoKubernetes/serial/ProfileList (12.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-519622 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2bdfd321-7da6-4cb7-ae40-08529a7cb612] Pending
helpers_test.go:352: "busybox" [2bdfd321-7da6-4cb7-ae40-08529a7cb612] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2bdfd321-7da6-4cb7-ae40-08529a7cb612] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00424316s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-519622 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-710014
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-710014: (1.30789647s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-695217 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-695217 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m22.460994886s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (41.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-710014 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-710014 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (41.083211244s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (41.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-519622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-519622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.787068503s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-519622 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (88.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-519622 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-519622 --alsologtostderr -v=3: (1m28.142181722s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (88.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-553278 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c9158fd4-7495-4096-917b-2f07ee285858] Pending
helpers_test.go:352: "busybox" [c9158fd4-7495-4096-917b-2f07ee285858] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c9158fd4-7495-4096-917b-2f07ee285858] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004720687s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-553278 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-553278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-553278 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.00693709s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-553278 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (86.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-553278 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-553278 --alsologtostderr -v=3: (1m26.509062037s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (86.51s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-710014 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-710014 "sudo systemctl is-active --quiet service kubelet": exit status 1 (199.503843ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281360 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 17:23:34.096580  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281360 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (57.009294124s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-695217 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [955b6f9e-7a37-44fa-be59-dc8750bd3c93] Pending
helpers_test.go:352: "busybox" [955b6f9e-7a37-44fa-be59-dc8750bd3c93] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [955b6f9e-7a37-44fa-be59-dc8750bd3c93] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004448884s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-695217 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-695217 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-695217 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (87.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-695217 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-695217 --alsologtostderr -v=3: (1m27.826645865s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-281360 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f84e6e6c-9e8d-487e-8474-d3e9cfb0b6f2] Pending
helpers_test.go:352: "busybox" [f84e6e6c-9e8d-487e-8474-d3e9cfb0b6f2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f84e6e6c-9e8d-487e-8474-d3e9cfb0b6f2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005073902s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-281360 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-519622 -n old-k8s-version-519622
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-519622 -n old-k8s-version-519622: exit status 7 (68.087147ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-519622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-519622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-519622 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.28.0: (44.575191726s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-519622 -n old-k8s-version-519622
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-281360 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-281360 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)
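
For comparison with the stopped-profile variant above, EnableAddonWhileActive issues the same kind of addon enable against a running cluster; the --registries override points the MetricsServer image at fake.domain, presumably so the test can verify the deployment wiring without a real image pull, and the kubectl describe call is the verification step. Condensed:

out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-281360 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
kubectl --context default-k8s-diff-port-281360 describe deploy/metrics-server -n kube-system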

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (84.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-281360 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-281360 --alsologtostderr -v=3: (1m24.122052389s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-553278 -n no-preload-553278
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-553278 -n no-preload-553278: exit status 7 (90.100542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-553278 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (61.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-553278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
E1019 17:24:58.888227  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-553278 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (1m1.398891498s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-553278 -n no-preload-553278
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (61.72s)
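
The SecondStart entries in this run restart an existing, stopped profile with the same flags as its first start. The distinctive flag for this group is --preload=false, which (to my understanding) skips minikube's preloaded image tarball so the restart exercises loading images individually. A trimmed reproduction sketch, with the non-essential logging/wait flags dropped:

# restart the stopped no-preload profile; --preload=false avoids the preload tarball (assumption noted above)
out/minikube-linux-amd64 start -p no-preload-553278 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.1
out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-553278 -n no-preload-553278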

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cz9t6" [32bb3505-2d8a-4aef-a412-d4a04ed57093] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cz9t6" [32bb3505-2d8a-4aef-a412-d4a04ed57093] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.020877541s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.03s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-cz9t6" [32bb3505-2d8a-4aef-a412-d4a04ed57093] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009762722s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-519622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-519622 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-519622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-519622 -n old-k8s-version-519622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-519622 -n old-k8s-version-519622: exit status 2 (280.427416ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-519622 -n old-k8s-version-519622
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-519622 -n old-k8s-version-519622: exit status 2 (273.162191ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-519622 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-519622 -n old-k8s-version-519622
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-519622 -n old-k8s-version-519622
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.06s)
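
The Pause sequence above is identical across the groups in this run: pause the profile, confirm the API server reports Paused and the kubelet reports Stopped (both status probes exit 2, which the test tolerates), then unpause and re-check. Condensed, with the expected outputs as comments:

out/minikube-linux-amd64 pause -p old-k8s-version-519622 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-519622 -n old-k8s-version-519622   # "Paused", exit status 2
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-519622 -n old-k8s-version-519622     # "Stopped", exit status 2
out/minikube-linux-amd64 unpause -p old-k8s-version-519622 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-519622 -n old-k8s-version-519622   # healthy again, exit status 0
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-519622 -n old-k8s-version-519622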

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-277974 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-277974 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (47.774575095s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.77s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-695217 -n embed-certs-695217
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-695217 -n embed-certs-695217: exit status 7 (80.560489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-695217 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-695217 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-695217 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (52.75135475s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-695217 -n embed-certs-695217
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mg9rr" [469fc240-f26d-4e7c-a649-9c703273b1d3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003613343s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mg9rr" [469fc240-f26d-4e7c-a649-9c703273b1d3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004852249s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-553278 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-553278 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-553278 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-553278 --alsologtostderr -v=1: (1.004679003s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-553278 -n no-preload-553278
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-553278 -n no-preload-553278: exit status 2 (258.1172ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-553278 -n no-preload-553278
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-553278 -n no-preload-553278: exit status 2 (251.67937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-553278 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-553278 --alsologtostderr -v=1: (1.136963582s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-553278 -n no-preload-553278
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-553278 -n no-preload-553278
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360: exit status 7 (91.724842ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281360 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.80s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281360 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281360 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (55.37762581s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (114.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m54.606445525s)
--- PASS: TestNetworkPlugins/group/auto/Start (114.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-277974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-277974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.361847746s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (8.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-277974 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-277974 --alsologtostderr -v=3: (8.80845413s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.81s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277974 -n newest-cni-277974
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277974 -n newest-cni-277974: exit status 7 (80.590306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-277974 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (56.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-277974 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-277974 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --auto-update-drivers=false --kubernetes-version=v1.34.1: (56.426952172s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-277974 -n newest-cni-277974
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (56.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24txz" [275ebc74-652c-40ba-86be-dd51e81ea574] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24txz" [275ebc74-652c-40ba-86be-dd51e81ea574] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003789778s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-24txz" [275ebc74-652c-40ba-86be-dd51e81ea574] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004743781s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-695217 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-49cz5" [660edc70-6641-4d2e-b900-71dd8726a4b0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-49cz5" [660edc70-6641-4d2e-b900-71dd8726a4b0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.005252296s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (20.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-695217 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (4.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-695217 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-695217 --alsologtostderr -v=1: (1.043091388s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-695217 -n embed-certs-695217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-695217 -n embed-certs-695217: exit status 2 (276.938549ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-695217 -n embed-certs-695217
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-695217 -n embed-certs-695217: exit status 2 (304.144636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-695217 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-695217 --alsologtostderr -v=1: (1.285851269s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-695217 -n embed-certs-695217
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-695217 -n embed-certs-695217
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (73.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.45046152s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-49cz5" [660edc70-6641-4d2e-b900-71dd8726a4b0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004680128s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-281360 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281360 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.40s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-281360 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-281360 --alsologtostderr -v=1: (1.103063295s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360: exit status 2 (310.231096ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360: exit status 2 (309.526561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-281360 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-281360 -n default-k8s-diff-port-281360
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (79.90s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m19.904376151s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.90s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-277974 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-277974 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277974 -n newest-cni-277974
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277974 -n newest-cni-277974: exit status 2 (247.438433ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277974 -n newest-cni-277974
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277974 -n newest-cni-277974: exit status 2 (246.48075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-277974 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-277974 -n newest-cni-277974
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-277974 -n newest-cni-277974
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
E1019 17:29:37.026802  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/default-k8s-diff-port-281360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (98.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
E1019 17:27:35.705061  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:35.711537  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:35.722857  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:35.744291  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:35.785741  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:35.867271  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:36.028848  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:36.350604  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:36.993015  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:38.274374  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:40.836220  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:27:45.957530  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m38.019378326s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (98.02s)
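
Unlike the kindnet and calico groups above, which select a built-in CNI by name, this group passes --cni a path to a manifest file (testdata/kube-flannel.yaml), so minikube applies a user-supplied CNI manifest instead. Trimmed to the CNI-relevant flags:

# built-in plugin selected by name
out/minikube-linux-amd64 start -p kindnet-162308 --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio
# custom CNI manifest supplied as a local file
out/minikube-linux-amd64 start -p custom-flannel-162308 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio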

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-162308 "pgrep -a kubelet"
I1019 17:27:51.212221  278280 config.go:182] Loaded profile config "auto-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zgbtp" [afb13dad-fa91-4b5d-be61-955f1466441c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 17:27:56.199551  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-zgbtp" [afb13dad-fa91-4b5d-be61-955f1466441c] Running
E1019 17:28:00.133184  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.139725  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.151193  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.172705  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.214244  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.295879  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.457651  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:00.779777  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:01.421929  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:02.703683  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005028193s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
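
The DNS, Localhost and HairPin checks above all run inside the netcat deployment created by NetCatPod; condensed, they are a cluster-DNS lookup, a loopback connect, and a hairpin connect back to the pod's own service:

kubectl --context auto-162308 exec deployment/netcat -- nslookup kubernetes.default                     # DNS
kubectl --context auto-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # Localhost
kubectl --context auto-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # HairPin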

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rdgbt" [e81735da-9e19-4ed7-8258-b8592ef07123] Running
E1019 17:28:10.387753  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004772541s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-162308 "pgrep -a kubelet"
I1019 17:28:15.043790  278280 config.go:182] Loaded profile config "kindnet-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-snhps" [29fca81f-1182-412c-a8d6-4dd11c0aebe4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 17:28:16.681832  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/old-k8s-version-519622/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:17.169236  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/functional-244936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1019 17:28:20.629157  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-snhps" [29fca81f-1182-412c-a8d6-4dd11c0aebe4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006352949s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (84.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m24.914633476s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)
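The DNS, Localhost and HairPin subtests are three small connectivity probes run inside the netcat pod: name resolution of kubernetes.default, a TCP connect to localhost:8080, and a TCP connect back to the netcat service by name from its own backing pod (hairpin traffic). In the nc invocations, -z scans without sending data, -w 5 sets a 5-second timeout, and -i 5 sets the interval between probes. They can be replayed by hand as follows (the same commands the test issues, against the kindnet profile from this run):

    kubectl --context kindnet-162308 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"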

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-47r8s" [3596c652-a02e-4952-a852-0b7ddcafa91d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1019 17:28:41.111354  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006866394s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.03s)
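ControllerPod only verifies that the CNI's own daemon pod reaches Running/Ready before the network probes proceed. An equivalent manual check, using the same label selector the test waits on, would look like this (sketch):

    kubectl --context calico-162308 -n kube-system get pods -l k8s-app=calico-node
    kubectl --context calico-162308 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m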

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-162308 "pgrep -a kubelet"
I1019 17:28:44.070097  278280 config.go:182] Loaded profile config "calico-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zsw8t" [0f9c2f18-2216-4b8f-9281-d5c3d53f4ff5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zsw8t" [0f9c2f18-2216-4b8f-9281-d5c3d53f4ff5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005617979s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (73.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m13.875495002s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-162308 "pgrep -a kubelet"
I1019 17:29:02.785734  278280 config.go:182] Loaded profile config "custom-flannel-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4h945" [806de507-cbb1-40e9-be6d-2b66f4699be9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4h945" [806de507-cbb1-40e9-be6d-2b66f4699be9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00407717s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (82.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-162308 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio --auto-update-drivers=false: (1m22.581048492s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-162308 "pgrep -a kubelet"
I1019 17:29:46.857438  278280 config.go:182] Loaded profile config "enable-default-cni-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5zkxv" [4839fb8b-8fb1-4198-88ce-f14ae88207ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5zkxv" [4839fb8b-8fb1-4198-88ce-f14ae88207ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003489342s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1019 17:29:57.508938  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/default-k8s-diff-port-281360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-cvf25" [e002c3e6-a85e-483a-8052-998b7117266c] Running
E1019 17:29:58.888672  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/addons-305823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003931271s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-162308 "pgrep -a kubelet"
I1019 17:30:04.943459  278280 config.go:182] Loaded profile config "flannel-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zzcgg" [1194d4d9-64b0-424d-bd41-c84610be028e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zzcgg" [1194d4d9-64b0-424d-bd41-c84610be028e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.008181633s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-162308 "pgrep -a kubelet"
I1019 17:30:36.516330  278280 config.go:182] Loaded profile config "bridge-162308": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-162308 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g22bq" [eac25b2d-a9c8-4a3a-8ace-6cb6ca1f5079] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1019 17:30:38.470877  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/default-k8s-diff-port-281360/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-g22bq" [eac25b2d-a9c8-4a3a-8ace-6cb6ca1f5079] Running
E1019 17:30:43.995095  278280 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/no-preload-553278/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004215597s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-162308 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-162308 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (40/324)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
146 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
148 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestFunctionalNewestKubernetes 0
158 TestGvisorAddon 0
180 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
264 TestStartStop/group/disable-driver-mounts 0.14
269 TestNetworkPlugins/group/kubenet 3.12
277 TestNetworkPlugins/group/cilium 5.75
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-305823 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
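All of the TunnelCmd subtests above skip for the same reason: the tunnel harness needs to run 'route' to inspect and adjust the host routing table, which requires root, and the CI account cannot escalate without a password. When running these locally, pre-authorizing sudo before starting the tunnel may avoid the skip. A rough sketch, reusing the functional profile name from this run as an example:

    sudo -v                                              # cache sudo credentials so 'route' can run unattended
    out/minikube-linux-amd64 tunnel -p functional-244936 &
    kubectl --context functional-244936 get svc -w       # watch for LoadBalancer services to get an external IP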

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-675740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-675740
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-162308 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.127:8443
  name: cert-expiration-067580
contexts:
- context:
    cluster: cert-expiration-067580
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-067580
  name: cert-expiration-067580
current-context: ""
kind: Config
users:
- name: cert-expiration-067580
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.crt
    client-key: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.key
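Note that current-context is empty and the only entry in this kubeconfig is cert-expiration-067580; there is no kubenet-162308 context at all, which is why every kubectl probe in this debugLogs dump fails with "context was not found". A quick way to confirm that state (sketch):

    kubectl config get-contexts
    kubectl config current-context    # errors when no current context is set
    out/minikube-linux-amd64 profile list    # kubenet-162308 is absent because its cluster was never started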

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-162308

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-162308"

                                                
                                                
----------------------- debugLogs end: kubenet-162308 [took: 2.959326755s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-162308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-162308
--- SKIP: TestNetworkPlugins/group/kubenet (3.12s)
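
The "Profile ... not found" and "context was not found" lines above are expected: the debugLogs collector runs its kubectl and minikube probes against a profile that was never started, because the test skips before any cluster is created. A minimal sketch of how the same two messages can be reproduced by hand, assuming a host with kubectl and the report's minikube binary available and no kubenet-162308 profile defined (exact command forms are illustrative, not the collector's own):

  # kubectl rejects a kubeconfig context that does not exist:
  kubectl --context kubenet-162308 get pods --all-namespaces
  # expected: Error in configuration: context was not found for specified context: kubenet-162308

  # minikube prints the profile hint when asked about a profile it has never started:
  out/minikube-linux-amd64 -p kubenet-162308 ssh "sudo systemctl status crio"
  # expected: * Profile "kubenet-162308" not found. Run "minikube profile list" to view all profiles.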

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-162308 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-162308" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21683-274250/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.127:8443
  name: cert-expiration-067580
contexts:
- context:
    cluster: cert-expiration-067580
    extensions:
    - extension:
        last-update: Sun, 19 Oct 2025 17:18:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-067580
  name: cert-expiration-067580
current-context: ""
kind: Config
users:
- name: cert-expiration-067580
  user:
    client-certificate: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.crt
    client-key: /home/jenkins/minikube-integration/21683-274250/.minikube/profiles/cert-expiration-067580/client.key
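
The kubeconfig above has current-context set to the empty string, so a bare kubectl call has no cluster to talk to; the only entry present is cert-expiration-067580, and no cilium-162308 context was ever written. A short sketch, using stock kubectl config subcommands and the context name taken from the dump above, of how the current context can be inspected and switched:

  # List every context in the kubeconfig; the CURRENT column is empty here:
  kubectl config get-contexts
  # With current-context "" this typically fails with "error: current-context is not set":
  kubectl config current-context
  # Point kubectl at the one existing context explicitly:
  kubectl config use-context cert-expiration-067580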

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-162308

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-162308" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-162308"

                                                
                                                
----------------------- debugLogs end: cilium-162308 [took: 5.585876342s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-162308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-162308
--- SKIP: TestNetworkPlugins/group/cilium (5.75s)
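
Both skipped network-plugin groups finish by deleting their placeholder profiles, as logged above. A small follow-up sketch for confirming that the run left no stray profiles behind, assuming the same binary path used throughout this report:

  # List every profile this minikube installation still knows about:
  out/minikube-linux-amd64 profile list
  # If anything is left over, remove all remaining profiles and their local state:
  out/minikube-linux-amd64 delete --all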

                                                
                                    