=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-415393 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-415393 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-415393 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [61de065f-ddd0-4b74-9082-0b8df43235d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [61de065f-ddd0-4b74-9082-0b8df43235d4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.012546416s
I0317 10:28:58.467594 12441 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-415393 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-415393 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.380180947s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run: kubectl --context addons-415393 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-415393 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.132
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-415393 -n addons-415393
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-415393 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-415393 logs -n 25: (1.237719763s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| delete | -p download-only-998296 | download-only-998296 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
| delete | -p download-only-563764 | download-only-563764 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
| delete | -p download-only-998296 | download-only-998296 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
| start | --download-only -p | binary-mirror-402776 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | |
| | binary-mirror-402776 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43215 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-402776 | binary-mirror-402776 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | 17 Mar 25 10:25 UTC |
| addons | enable dashboard -p | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | |
| | addons-415393 | | | | | |
| addons | disable dashboard -p | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:25 UTC | |
| | addons-415393 | | | | | |
| start | -p addons-415393 --wait=true | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:26 UTC | 17 Mar 25 10:28 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | -p addons-415393 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-415393 ip | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:28 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-415393 ssh curl -s | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:28 UTC | 17 Mar 25 10:29 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ssh | addons-415393 ssh cat | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:29 UTC | 17 Mar 25 10:29 UTC |
| | /opt/local-path-provisioner/pvc-87d02825-bd2d-4d54-846a-d15a71b433ca_default_test-pvc/file1 | | | | | |
| addons | addons-415393 addons disable | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:29 UTC | 17 Mar 25 10:29 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:29 UTC | 17 Mar 25 10:29 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:29 UTC | 17 Mar 25 10:29 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-415393 addons | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:29 UTC | 17 Mar 25 10:29 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-415393 ip | addons-415393 | jenkins | v1.35.0 | 17 Mar 25 10:31 UTC | 17 Mar 25 10:31 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/17 10:26:00
Running on machine: ubuntu-20-agent-8
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0317 10:26:00.004249 13120 out.go:345] Setting OutFile to fd 1 ...
I0317 10:26:00.004482 13120 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:26:00.004491 13120 out.go:358] Setting ErrFile to fd 2...
I0317 10:26:00.004495 13120 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 10:26:00.004685 13120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20535-5255/.minikube/bin
I0317 10:26:00.005303 13120 out.go:352] Setting JSON to false
I0317 10:26:00.006086 13120 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":500,"bootTime":1742206660,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0317 10:26:00.006180 13120 start.go:139] virtualization: kvm guest
I0317 10:26:00.008079 13120 out.go:177] * [addons-415393] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0317 10:26:00.009544 13120 out.go:177] - MINIKUBE_LOCATION=20535
I0317 10:26:00.009586 13120 notify.go:220] Checking for updates...
I0317 10:26:00.012025 13120 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0317 10:26:00.013198 13120 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20535-5255/kubeconfig
I0317 10:26:00.014331 13120 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20535-5255/.minikube
I0317 10:26:00.015514 13120 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0317 10:26:00.016617 13120 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0317 10:26:00.017909 13120 driver.go:394] Setting default libvirt URI to qemu:///system
I0317 10:26:00.052499 13120 out.go:177] * Using the kvm2 driver based on user configuration
I0317 10:26:00.053758 13120 start.go:297] selected driver: kvm2
I0317 10:26:00.053776 13120 start.go:901] validating driver "kvm2" against <nil>
I0317 10:26:00.053791 13120 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0317 10:26:00.054469 13120 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 10:26:00.054568 13120 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20535-5255/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0317 10:26:00.070122 13120 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0317 10:26:00.070174 13120 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0317 10:26:00.070423 13120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0317 10:26:00.070459 13120 cni.go:84] Creating CNI manager for ""
I0317 10:26:00.070512 13120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 10:26:00.070523 13120 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0317 10:26:00.070587 13120 start.go:340] cluster config:
{Name:addons-415393 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-415393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 10:26:00.070691 13120 iso.go:125] acquiring lock: {Name:mk92ff0b84566c7dc2e46765e6de0666fbe86f4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 10:26:00.072407 13120 out.go:177] * Starting "addons-415393" primary control-plane node in "addons-415393" cluster
I0317 10:26:00.073642 13120 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0317 10:26:00.073692 13120 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20535-5255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
I0317 10:26:00.073701 13120 cache.go:56] Caching tarball of preloaded images
I0317 10:26:00.073780 13120 preload.go:172] Found /home/jenkins/minikube-integration/20535-5255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0317 10:26:00.073790 13120 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
I0317 10:26:00.074086 13120 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/config.json ...
I0317 10:26:00.074108 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/config.json: {Name:mk99097158db033ee1ba7e4025ecfd5e7c436ff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:00.074241 13120 start.go:360] acquireMachinesLock for addons-415393: {Name:mk252f63cc10a1b0b2b9a0530d90cb5042c66959 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0317 10:26:00.074283 13120 start.go:364] duration metric: took 29.494µs to acquireMachinesLock for "addons-415393"
I0317 10:26:00.074311 13120 start.go:93] Provisioning new machine with config: &{Name:addons-415393 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-415393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0317 10:26:00.074358 13120 start.go:125] createHost starting for "" (driver="kvm2")
I0317 10:26:00.076519 13120 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0317 10:26:00.076687 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:00.076745 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:00.091487 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43935
I0317 10:26:00.092010 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:00.092590 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:00.092614 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:00.093040 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:00.093237 13120 main.go:141] libmachine: (addons-415393) Calling .GetMachineName
I0317 10:26:00.093374 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:00.093533 13120 start.go:159] libmachine.API.Create for "addons-415393" (driver="kvm2")
I0317 10:26:00.093578 13120 client.go:168] LocalClient.Create starting
I0317 10:26:00.093623 13120 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem
I0317 10:26:00.212912 13120 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/cert.pem
I0317 10:26:00.461297 13120 main.go:141] libmachine: Running pre-create checks...
I0317 10:26:00.461322 13120 main.go:141] libmachine: (addons-415393) Calling .PreCreateCheck
I0317 10:26:00.461802 13120 main.go:141] libmachine: (addons-415393) Calling .GetConfigRaw
I0317 10:26:00.462287 13120 main.go:141] libmachine: Creating machine...
I0317 10:26:00.462300 13120 main.go:141] libmachine: (addons-415393) Calling .Create
I0317 10:26:00.462490 13120 main.go:141] libmachine: (addons-415393) creating KVM machine...
I0317 10:26:00.462508 13120 main.go:141] libmachine: (addons-415393) creating network...
I0317 10:26:00.463612 13120 main.go:141] libmachine: (addons-415393) DBG | found existing default KVM network
I0317 10:26:00.464261 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:00.464118 13142 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000200dd0}
I0317 10:26:00.464301 13120 main.go:141] libmachine: (addons-415393) DBG | created network xml:
I0317 10:26:00.464318 13120 main.go:141] libmachine: (addons-415393) DBG | <network>
I0317 10:26:00.464338 13120 main.go:141] libmachine: (addons-415393) DBG | <name>mk-addons-415393</name>
I0317 10:26:00.464351 13120 main.go:141] libmachine: (addons-415393) DBG | <dns enable='no'/>
I0317 10:26:00.464374 13120 main.go:141] libmachine: (addons-415393) DBG |
I0317 10:26:00.464396 13120 main.go:141] libmachine: (addons-415393) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0317 10:26:00.464410 13120 main.go:141] libmachine: (addons-415393) DBG | <dhcp>
I0317 10:26:00.464420 13120 main.go:141] libmachine: (addons-415393) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0317 10:26:00.464426 13120 main.go:141] libmachine: (addons-415393) DBG | </dhcp>
I0317 10:26:00.464432 13120 main.go:141] libmachine: (addons-415393) DBG | </ip>
I0317 10:26:00.464436 13120 main.go:141] libmachine: (addons-415393) DBG |
I0317 10:26:00.464442 13120 main.go:141] libmachine: (addons-415393) DBG | </network>
I0317 10:26:00.464466 13120 main.go:141] libmachine: (addons-415393) DBG |
I0317 10:26:00.469669 13120 main.go:141] libmachine: (addons-415393) DBG | trying to create private KVM network mk-addons-415393 192.168.39.0/24...
I0317 10:26:00.538814 13120 main.go:141] libmachine: (addons-415393) DBG | private KVM network mk-addons-415393 192.168.39.0/24 created
I0317 10:26:00.538845 13120 main.go:141] libmachine: (addons-415393) setting up store path in /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393 ...
I0317 10:26:00.538857 13120 main.go:141] libmachine: (addons-415393) building disk image from file:///home/jenkins/minikube-integration/20535-5255/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0317 10:26:00.538869 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:00.538782 13142 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20535-5255/.minikube
I0317 10:26:00.539028 13120 main.go:141] libmachine: (addons-415393) Downloading /home/jenkins/minikube-integration/20535-5255/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20535-5255/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0317 10:26:00.810962 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:00.810821 13142 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa...
I0317 10:26:01.103431 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:01.103277 13142 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/addons-415393.rawdisk...
I0317 10:26:01.103470 13120 main.go:141] libmachine: (addons-415393) DBG | Writing magic tar header
I0317 10:26:01.103489 13120 main.go:141] libmachine: (addons-415393) DBG | Writing SSH key tar header
I0317 10:26:01.103505 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:01.103399 13142 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393 ...
I0317 10:26:01.103521 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393
I0317 10:26:01.103539 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393 (perms=drwx------)
I0317 10:26:01.103549 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20535-5255/.minikube/machines
I0317 10:26:01.103556 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins/minikube-integration/20535-5255/.minikube/machines (perms=drwxr-xr-x)
I0317 10:26:01.103574 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins/minikube-integration/20535-5255/.minikube (perms=drwxr-xr-x)
I0317 10:26:01.103583 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20535-5255/.minikube
I0317 10:26:01.103593 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins/minikube-integration/20535-5255 (perms=drwxrwxr-x)
I0317 10:26:01.103608 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0317 10:26:01.103616 13120 main.go:141] libmachine: (addons-415393) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0317 10:26:01.103623 13120 main.go:141] libmachine: (addons-415393) creating domain...
I0317 10:26:01.103629 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20535-5255
I0317 10:26:01.103637 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0317 10:26:01.103650 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home/jenkins
I0317 10:26:01.103658 13120 main.go:141] libmachine: (addons-415393) DBG | checking permissions on dir: /home
I0317 10:26:01.103668 13120 main.go:141] libmachine: (addons-415393) DBG | skipping /home - not owner
I0317 10:26:01.104834 13120 main.go:141] libmachine: (addons-415393) define libvirt domain using xml:
I0317 10:26:01.104857 13120 main.go:141] libmachine: (addons-415393) <domain type='kvm'>
I0317 10:26:01.104866 13120 main.go:141] libmachine: (addons-415393) <name>addons-415393</name>
I0317 10:26:01.104879 13120 main.go:141] libmachine: (addons-415393) <memory unit='MiB'>4000</memory>
I0317 10:26:01.104887 13120 main.go:141] libmachine: (addons-415393) <vcpu>2</vcpu>
I0317 10:26:01.104893 13120 main.go:141] libmachine: (addons-415393) <features>
I0317 10:26:01.104898 13120 main.go:141] libmachine: (addons-415393) <acpi/>
I0317 10:26:01.104902 13120 main.go:141] libmachine: (addons-415393) <apic/>
I0317 10:26:01.104906 13120 main.go:141] libmachine: (addons-415393) <pae/>
I0317 10:26:01.104913 13120 main.go:141] libmachine: (addons-415393)
I0317 10:26:01.104918 13120 main.go:141] libmachine: (addons-415393) </features>
I0317 10:26:01.104923 13120 main.go:141] libmachine: (addons-415393) <cpu mode='host-passthrough'>
I0317 10:26:01.104928 13120 main.go:141] libmachine: (addons-415393)
I0317 10:26:01.104932 13120 main.go:141] libmachine: (addons-415393) </cpu>
I0317 10:26:01.104940 13120 main.go:141] libmachine: (addons-415393) <os>
I0317 10:26:01.104947 13120 main.go:141] libmachine: (addons-415393) <type>hvm</type>
I0317 10:26:01.104982 13120 main.go:141] libmachine: (addons-415393) <boot dev='cdrom'/>
I0317 10:26:01.105005 13120 main.go:141] libmachine: (addons-415393) <boot dev='hd'/>
I0317 10:26:01.105016 13120 main.go:141] libmachine: (addons-415393) <bootmenu enable='no'/>
I0317 10:26:01.105030 13120 main.go:141] libmachine: (addons-415393) </os>
I0317 10:26:01.105049 13120 main.go:141] libmachine: (addons-415393) <devices>
I0317 10:26:01.105067 13120 main.go:141] libmachine: (addons-415393) <disk type='file' device='cdrom'>
I0317 10:26:01.105082 13120 main.go:141] libmachine: (addons-415393) <source file='/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/boot2docker.iso'/>
I0317 10:26:01.105092 13120 main.go:141] libmachine: (addons-415393) <target dev='hdc' bus='scsi'/>
I0317 10:26:01.105100 13120 main.go:141] libmachine: (addons-415393) <readonly/>
I0317 10:26:01.105104 13120 main.go:141] libmachine: (addons-415393) </disk>
I0317 10:26:01.105137 13120 main.go:141] libmachine: (addons-415393) <disk type='file' device='disk'>
I0317 10:26:01.105157 13120 main.go:141] libmachine: (addons-415393) <driver name='qemu' type='raw' cache='default' io='threads' />
I0317 10:26:01.105169 13120 main.go:141] libmachine: (addons-415393) <source file='/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/addons-415393.rawdisk'/>
I0317 10:26:01.105176 13120 main.go:141] libmachine: (addons-415393) <target dev='hda' bus='virtio'/>
I0317 10:26:01.105181 13120 main.go:141] libmachine: (addons-415393) </disk>
I0317 10:26:01.105188 13120 main.go:141] libmachine: (addons-415393) <interface type='network'>
I0317 10:26:01.105194 13120 main.go:141] libmachine: (addons-415393) <source network='mk-addons-415393'/>
I0317 10:26:01.105200 13120 main.go:141] libmachine: (addons-415393) <model type='virtio'/>
I0317 10:26:01.105205 13120 main.go:141] libmachine: (addons-415393) </interface>
I0317 10:26:01.105212 13120 main.go:141] libmachine: (addons-415393) <interface type='network'>
I0317 10:26:01.105226 13120 main.go:141] libmachine: (addons-415393) <source network='default'/>
I0317 10:26:01.105244 13120 main.go:141] libmachine: (addons-415393) <model type='virtio'/>
I0317 10:26:01.105252 13120 main.go:141] libmachine: (addons-415393) </interface>
I0317 10:26:01.105256 13120 main.go:141] libmachine: (addons-415393) <serial type='pty'>
I0317 10:26:01.105263 13120 main.go:141] libmachine: (addons-415393) <target port='0'/>
I0317 10:26:01.105267 13120 main.go:141] libmachine: (addons-415393) </serial>
I0317 10:26:01.105275 13120 main.go:141] libmachine: (addons-415393) <console type='pty'>
I0317 10:26:01.105281 13120 main.go:141] libmachine: (addons-415393) <target type='serial' port='0'/>
I0317 10:26:01.105288 13120 main.go:141] libmachine: (addons-415393) </console>
I0317 10:26:01.105292 13120 main.go:141] libmachine: (addons-415393) <rng model='virtio'>
I0317 10:26:01.105298 13120 main.go:141] libmachine: (addons-415393) <backend model='random'>/dev/random</backend>
I0317 10:26:01.105307 13120 main.go:141] libmachine: (addons-415393) </rng>
I0317 10:26:01.105323 13120 main.go:141] libmachine: (addons-415393)
I0317 10:26:01.105334 13120 main.go:141] libmachine: (addons-415393)
I0317 10:26:01.105345 13120 main.go:141] libmachine: (addons-415393) </devices>
I0317 10:26:01.105352 13120 main.go:141] libmachine: (addons-415393) </domain>
I0317 10:26:01.105360 13120 main.go:141] libmachine: (addons-415393)
I0317 10:26:01.111296 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:ef:43:33 in network default
I0317 10:26:01.111869 13120 main.go:141] libmachine: (addons-415393) starting domain...
I0317 10:26:01.111892 13120 main.go:141] libmachine: (addons-415393) ensuring networks are active...
I0317 10:26:01.111903 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:01.112467 13120 main.go:141] libmachine: (addons-415393) Ensuring network default is active
I0317 10:26:01.112837 13120 main.go:141] libmachine: (addons-415393) Ensuring network mk-addons-415393 is active
I0317 10:26:01.113298 13120 main.go:141] libmachine: (addons-415393) getting domain XML...
I0317 10:26:01.113874 13120 main.go:141] libmachine: (addons-415393) creating domain...
I0317 10:26:02.505876 13120 main.go:141] libmachine: (addons-415393) waiting for IP...
I0317 10:26:02.506554 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:02.506939 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:02.507010 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:02.506952 13142 retry.go:31] will retry after 275.291122ms: waiting for domain to come up
I0317 10:26:02.783385 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:02.783834 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:02.783850 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:02.783816 13142 retry.go:31] will retry after 303.619453ms: waiting for domain to come up
I0317 10:26:03.089279 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:03.089636 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:03.089665 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:03.089605 13142 retry.go:31] will retry after 469.553851ms: waiting for domain to come up
I0317 10:26:03.561380 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:03.561805 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:03.561839 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:03.561774 13142 retry.go:31] will retry after 373.56523ms: waiting for domain to come up
I0317 10:26:03.937338 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:03.937935 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:03.937971 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:03.937922 13142 retry.go:31] will retry after 584.485492ms: waiting for domain to come up
I0317 10:26:04.523583 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:04.523982 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:04.524025 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:04.523946 13142 retry.go:31] will retry after 862.940657ms: waiting for domain to come up
I0317 10:26:05.388292 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:05.388779 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:05.388804 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:05.388743 13142 retry.go:31] will retry after 997.655882ms: waiting for domain to come up
I0317 10:26:06.387818 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:06.388244 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:06.388272 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:06.388202 13142 retry.go:31] will retry after 1.45649458s: waiting for domain to come up
I0317 10:26:07.847037 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:07.847359 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:07.847396 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:07.847347 13142 retry.go:31] will retry after 1.612897917s: waiting for domain to come up
I0317 10:26:09.462053 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:09.462589 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:09.462612 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:09.462543 13142 retry.go:31] will retry after 2.053852489s: waiting for domain to come up
I0317 10:26:11.518709 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:11.519200 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:11.519226 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:11.519175 13142 retry.go:31] will retry after 1.945179412s: waiting for domain to come up
I0317 10:26:13.467150 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:13.467447 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:13.467487 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:13.467385 13142 retry.go:31] will retry after 2.75201659s: waiting for domain to come up
I0317 10:26:16.222387 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:16.222813 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:16.222875 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:16.222807 13142 retry.go:31] will retry after 3.053235389s: waiting for domain to come up
I0317 10:26:19.277367 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:19.277827 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find current IP address of domain addons-415393 in network mk-addons-415393
I0317 10:26:19.277848 13120 main.go:141] libmachine: (addons-415393) DBG | I0317 10:26:19.277794 13142 retry.go:31] will retry after 4.128559716s: waiting for domain to come up
I0317 10:26:23.410514 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.410963 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has current primary IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.410984 13120 main.go:141] libmachine: (addons-415393) found domain IP: 192.168.39.132
I0317 10:26:23.411013 13120 main.go:141] libmachine: (addons-415393) reserving static IP address...
I0317 10:26:23.411326 13120 main.go:141] libmachine: (addons-415393) DBG | unable to find host DHCP lease matching {name: "addons-415393", mac: "52:54:00:e8:ea:9e", ip: "192.168.39.132"} in network mk-addons-415393
I0317 10:26:23.484672 13120 main.go:141] libmachine: (addons-415393) DBG | Getting to WaitForSSH function...
I0317 10:26:23.484704 13120 main.go:141] libmachine: (addons-415393) reserved static IP address 192.168.39.132 for domain addons-415393
I0317 10:26:23.484733 13120 main.go:141] libmachine: (addons-415393) waiting for SSH...
I0317 10:26:23.487052 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.487438 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:23.487461 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.487737 13120 main.go:141] libmachine: (addons-415393) DBG | Using SSH client type: external
I0317 10:26:23.487765 13120 main.go:141] libmachine: (addons-415393) DBG | Using SSH private key: /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa (-rw-------)
I0317 10:26:23.487789 13120 main.go:141] libmachine: (addons-415393) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa -p 22] /usr/bin/ssh <nil>}
I0317 10:26:23.487803 13120 main.go:141] libmachine: (addons-415393) DBG | About to run SSH command:
I0317 10:26:23.487812 13120 main.go:141] libmachine: (addons-415393) DBG | exit 0
I0317 10:26:23.616692 13120 main.go:141] libmachine: (addons-415393) DBG | SSH cmd err, output: <nil>:
I0317 10:26:23.616963 13120 main.go:141] libmachine: (addons-415393) KVM machine creation complete
I0317 10:26:23.617239 13120 main.go:141] libmachine: (addons-415393) Calling .GetConfigRaw
I0317 10:26:23.617792 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:23.618023 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:23.618184 13120 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0317 10:26:23.618197 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:23.619455 13120 main.go:141] libmachine: Detecting operating system of created instance...
I0317 10:26:23.619487 13120 main.go:141] libmachine: Waiting for SSH to be available...
I0317 10:26:23.619495 13120 main.go:141] libmachine: Getting to WaitForSSH function...
I0317 10:26:23.619502 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:23.621540 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.621913 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:23.621938 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.622094 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:23.622261 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.622399 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.622544 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:23.622679 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:23.622875 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:23.622884 13120 main.go:141] libmachine: About to run SSH command:
exit 0
I0317 10:26:23.723940 13120 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0317 10:26:23.723966 13120 main.go:141] libmachine: Detecting the provisioner...
I0317 10:26:23.723974 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:23.726526 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.726915 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:23.726949 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.727102 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:23.727305 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.727460 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.727586 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:23.727759 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:23.728007 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:23.728019 13120 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0317 10:26:23.829309 13120 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0317 10:26:23.829403 13120 main.go:141] libmachine: found compatible host: buildroot
I0317 10:26:23.829420 13120 main.go:141] libmachine: Provisioning with buildroot...
I0317 10:26:23.829427 13120 main.go:141] libmachine: (addons-415393) Calling .GetMachineName
I0317 10:26:23.829703 13120 buildroot.go:166] provisioning hostname "addons-415393"
I0317 10:26:23.829727 13120 main.go:141] libmachine: (addons-415393) Calling .GetMachineName
I0317 10:26:23.829908 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:23.832518 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.832864 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:23.832891 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.833039 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:23.833284 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.833442 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.833605 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:23.833779 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:23.833987 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:23.833999 13120 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-415393 && echo "addons-415393" | sudo tee /etc/hostname
I0317 10:26:23.946556 13120 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-415393
I0317 10:26:23.946585 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:23.949319 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.949647 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:23.949694 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:23.949834 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:23.950015 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.950175 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:23.950278 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:23.950433 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:23.950648 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:23.950670 13120 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-415393' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-415393/g' /etc/hosts;
else
echo '127.0.1.1 addons-415393' | sudo tee -a /etc/hosts;
fi
fi
I0317 10:26:24.057036 13120 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0317 10:26:24.057067 13120 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20535-5255/.minikube CaCertPath:/home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20535-5255/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20535-5255/.minikube}
I0317 10:26:24.057084 13120 buildroot.go:174] setting up certificates
I0317 10:26:24.057108 13120 provision.go:84] configureAuth start
I0317 10:26:24.057121 13120 main.go:141] libmachine: (addons-415393) Calling .GetMachineName
I0317 10:26:24.057436 13120 main.go:141] libmachine: (addons-415393) Calling .GetIP
I0317 10:26:24.060316 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.060644 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.060683 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.060847 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.063343 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.063680 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.063705 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.063865 13120 provision.go:143] copyHostCerts
I0317 10:26:24.063931 13120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20535-5255/.minikube/key.pem (1675 bytes)
I0317 10:26:24.064085 13120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20535-5255/.minikube/ca.pem (1082 bytes)
I0317 10:26:24.064189 13120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20535-5255/.minikube/cert.pem (1123 bytes)
I0317 10:26:24.064311 13120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20535-5255/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca-key.pem org=jenkins.addons-415393 san=[127.0.0.1 192.168.39.132 addons-415393 localhost minikube]
I0317 10:26:24.308660 13120 provision.go:177] copyRemoteCerts
I0317 10:26:24.308732 13120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0317 10:26:24.308754 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.311327 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.311718 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.311742 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.311971 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.312160 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.312389 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.312536 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:24.395728 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0317 10:26:24.420518 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0317 10:26:24.443021 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0317 10:26:24.465997 13120 provision.go:87] duration metric: took 408.876357ms to configureAuth
I0317 10:26:24.466023 13120 buildroot.go:189] setting minikube options for container-runtime
I0317 10:26:24.466197 13120 config.go:182] Loaded profile config "addons-415393": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 10:26:24.466282 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.469059 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.469371 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.469405 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.469586 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.469800 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.469950 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.470071 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.470227 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:24.470439 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:24.470453 13120 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0317 10:26:24.694520 13120 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0317 10:26:24.694545 13120 main.go:141] libmachine: Checking connection to Docker...
I0317 10:26:24.694573 13120 main.go:141] libmachine: (addons-415393) Calling .GetURL
I0317 10:26:24.695762 13120 main.go:141] libmachine: (addons-415393) DBG | using libvirt version 6000000
I0317 10:26:24.697781 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.698105 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.698137 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.698275 13120 main.go:141] libmachine: Docker is up and running!
I0317 10:26:24.698287 13120 main.go:141] libmachine: Reticulating splines...
I0317 10:26:24.698294 13120 client.go:171] duration metric: took 24.604705024s to LocalClient.Create
I0317 10:26:24.698319 13120 start.go:167] duration metric: took 24.604788559s to libmachine.API.Create "addons-415393"
I0317 10:26:24.698329 13120 start.go:293] postStartSetup for "addons-415393" (driver="kvm2")
I0317 10:26:24.698337 13120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0317 10:26:24.698352 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:24.698605 13120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0317 10:26:24.698627 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.700939 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.701182 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.701215 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.701376 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.701547 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.701679 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.701779 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:24.782739 13120 ssh_runner.go:195] Run: cat /etc/os-release
I0317 10:26:24.786696 13120 info.go:137] Remote host: Buildroot 2023.02.9
I0317 10:26:24.786731 13120 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-5255/.minikube/addons for local assets ...
I0317 10:26:24.786815 13120 filesync.go:126] Scanning /home/jenkins/minikube-integration/20535-5255/.minikube/files for local assets ...
I0317 10:26:24.786853 13120 start.go:296] duration metric: took 88.517641ms for postStartSetup
I0317 10:26:24.786889 13120 main.go:141] libmachine: (addons-415393) Calling .GetConfigRaw
I0317 10:26:24.787518 13120 main.go:141] libmachine: (addons-415393) Calling .GetIP
I0317 10:26:24.790080 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.790542 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.790571 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.790849 13120 profile.go:143] Saving config to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/config.json ...
I0317 10:26:24.791051 13120 start.go:128] duration metric: took 24.716683634s to createHost
I0317 10:26:24.791075 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.793307 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.793618 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.793647 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.793734 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.793926 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.794058 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.794182 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.794343 13120 main.go:141] libmachine: Using SSH client type: native
I0317 10:26:24.794595 13120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.132 22 <nil> <nil>}
I0317 10:26:24.794607 13120 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0317 10:26:24.897224 13120 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742207184.873136991
I0317 10:26:24.897253 13120 fix.go:216] guest clock: 1742207184.873136991
I0317 10:26:24.897260 13120 fix.go:229] Guest: 2025-03-17 10:26:24.873136991 +0000 UTC Remote: 2025-03-17 10:26:24.791062535 +0000 UTC m=+24.823036194 (delta=82.074456ms)
I0317 10:26:24.897277 13120 fix.go:200] guest clock delta is within tolerance: 82.074456ms
I0317 10:26:24.897282 13120 start.go:83] releasing machines lock for "addons-415393", held for 24.822989185s
I0317 10:26:24.897316 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:24.897577 13120 main.go:141] libmachine: (addons-415393) Calling .GetIP
I0317 10:26:24.900328 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.900771 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.900798 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.900946 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:24.901418 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:24.901604 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:24.901695 13120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0317 10:26:24.901733 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.901840 13120 ssh_runner.go:195] Run: cat /version.json
I0317 10:26:24.901865 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:24.904568 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.904824 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.905020 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.905044 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.905151 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:24.905177 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:24.905197 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.905408 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:24.905409 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.905522 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:24.905590 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.905649 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:24.905701 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:24.905757 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:25.026354 13120 ssh_runner.go:195] Run: systemctl --version
I0317 10:26:25.032305 13120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0317 10:26:25.189879 13120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0317 10:26:25.195293 13120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0317 10:26:25.195359 13120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0317 10:26:25.210484 13120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0317 10:26:25.210507 13120 start.go:495] detecting cgroup driver to use...
I0317 10:26:25.210562 13120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0317 10:26:25.225916 13120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0317 10:26:25.239707 13120 docker.go:217] disabling cri-docker service (if available) ...
I0317 10:26:25.239770 13120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0317 10:26:25.252866 13120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0317 10:26:25.265996 13120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0317 10:26:25.386913 13120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0317 10:26:25.528132 13120 docker.go:233] disabling docker service ...
I0317 10:26:25.528205 13120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0317 10:26:25.541781 13120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0317 10:26:25.553815 13120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0317 10:26:25.690206 13120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0317 10:26:25.819536 13120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0317 10:26:25.842473 13120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0317 10:26:25.860380 13120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0317 10:26:25.860442 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.870980 13120 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0317 10:26:25.871043 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.881639 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.891580 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.901522 13120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0317 10:26:25.911819 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.921664 13120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.938395 13120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 10:26:25.948327 13120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0317 10:26:25.957563 13120 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0317 10:26:25.957623 13120 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0317 10:26:25.970560 13120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0317 10:26:25.979834 13120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 10:26:26.111038 13120 ssh_runner.go:195] Run: sudo systemctl restart crio
I0317 10:26:26.195961 13120 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0317 10:26:26.196071 13120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0317 10:26:26.200580 13120 start.go:563] Will wait 60s for crictl version
I0317 10:26:26.200657 13120 ssh_runner.go:195] Run: which crictl
I0317 10:26:26.204027 13120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0317 10:26:26.241288 13120 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0317 10:26:26.241416 13120 ssh_runner.go:195] Run: crio --version
I0317 10:26:26.269205 13120 ssh_runner.go:195] Run: crio --version
I0317 10:26:26.299521 13120 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
I0317 10:26:26.300692 13120 main.go:141] libmachine: (addons-415393) Calling .GetIP
I0317 10:26:26.303320 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:26.303627 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:26.303653 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:26.303835 13120 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0317 10:26:26.307788 13120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 10:26:26.319370 13120 kubeadm.go:883] updating cluster {Name:addons-415393 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-415393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0317 10:26:26.319488 13120 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0317 10:26:26.319531 13120 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 10:26:26.351924 13120 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
I0317 10:26:26.351987 13120 ssh_runner.go:195] Run: which lz4
I0317 10:26:26.355829 13120 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0317 10:26:26.359928 13120 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0317 10:26:26.359956 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
I0317 10:26:27.550626 13120 crio.go:462] duration metric: took 1.194838775s to copy over tarball
I0317 10:26:27.550689 13120 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0317 10:26:29.785940 13120 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.235230898s)
I0317 10:26:29.785966 13120 crio.go:469] duration metric: took 2.235314655s to extract the tarball
I0317 10:26:29.785973 13120 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0317 10:26:29.822549 13120 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 10:26:29.865745 13120 crio.go:514] all images are preloaded for cri-o runtime.
I0317 10:26:29.865767 13120 cache_images.go:84] Images are preloaded, skipping loading
I0317 10:26:29.865774 13120 kubeadm.go:934] updating node { 192.168.39.132 8443 v1.32.2 crio true true} ...
I0317 10:26:29.865866 13120 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-415393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:addons-415393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0317 10:26:29.865931 13120 ssh_runner.go:195] Run: crio config
I0317 10:26:29.911520 13120 cni.go:84] Creating CNI manager for ""
I0317 10:26:29.911544 13120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 10:26:29.911557 13120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0317 10:26:29.911576 13120 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.132 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-415393 NodeName:addons-415393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0317 10:26:29.911714 13120 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.132
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-415393"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.132"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.132"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0317 10:26:29.911771 13120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0317 10:26:29.921545 13120 binaries.go:44] Found k8s binaries, skipping transfer
I0317 10:26:29.921607 13120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0317 10:26:29.930208 13120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0317 10:26:29.945699 13120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0317 10:26:29.961362 13120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
I0317 10:26:29.977468 13120 ssh_runner.go:195] Run: grep 192.168.39.132 control-plane.minikube.internal$ /etc/hosts
I0317 10:26:29.981026 13120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.132 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 10:26:29.992276 13120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 10:26:30.107224 13120 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 10:26:30.124004 13120 certs.go:68] Setting up /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393 for IP: 192.168.39.132
I0317 10:26:30.124029 13120 certs.go:194] generating shared ca certs ...
I0317 10:26:30.124044 13120 certs.go:226] acquiring lock for ca certs: {Name:mk5d3e1677f54e8fc12769f72530f70840ba64b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.124181 13120 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20535-5255/.minikube/ca.key
I0317 10:26:30.247449 13120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-5255/.minikube/ca.crt ...
I0317 10:26:30.247476 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/ca.crt: {Name:mk9c0f70d92ae672d22c4cb5130dfaab120e5aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.247630 13120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-5255/.minikube/ca.key ...
I0317 10:26:30.247640 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/ca.key: {Name:mk1ff1b3fe5940091f927a5eeafbfac66b7dc8aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.247709 13120 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.key
I0317 10:26:30.379570 13120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.crt ...
I0317 10:26:30.379599 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.crt: {Name:mkd544e3d49d7d3c22c48da25456edce24adc99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.379742 13120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.key ...
I0317 10:26:30.379752 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.key: {Name:mka3c63716e6ff5665be66e58d52d6430e883f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.379821 13120 certs.go:256] generating profile certs ...
I0317 10:26:30.379872 13120 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.key
I0317 10:26:30.379891 13120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.crt with IP's: []
I0317 10:26:30.605505 13120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.crt ...
I0317 10:26:30.605532 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.crt: {Name:mk59ffce2a0ec80ccbf5a26d4bc37942c4ca1890 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.605681 13120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.key ...
I0317 10:26:30.605691 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/client.key: {Name:mk0c28eabb0e5bd6364da303d2609e372d6db322 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.605755 13120 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key.d78e3ccd
I0317 10:26:30.605773 13120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt.d78e3ccd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.132]
I0317 10:26:30.798723 13120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt.d78e3ccd ...
I0317 10:26:30.798751 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt.d78e3ccd: {Name:mk3992eac304048d828ed73072bf90ce521a9c44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.798900 13120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key.d78e3ccd ...
I0317 10:26:30.798915 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key.d78e3ccd: {Name:mkdc924d9e4413b8dcafad96645c5578787b6b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.798982 13120 certs.go:381] copying /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt.d78e3ccd -> /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt
I0317 10:26:30.799065 13120 certs.go:385] copying /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key.d78e3ccd -> /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key
I0317 10:26:30.799116 13120 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.key
I0317 10:26:30.799132 13120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.crt with IP's: []
I0317 10:26:30.852401 13120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.crt ...
I0317 10:26:30.852430 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.crt: {Name:mk0c15d2ae478bf66932405e7ecf3f1007d190a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.853042 13120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.key ...
I0317 10:26:30.853056 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.key: {Name:mk5f2fdf1ea39597ece7473b198296ea945075a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:30.853240 13120 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca-key.pem (1675 bytes)
I0317 10:26:30.853276 13120 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/ca.pem (1082 bytes)
I0317 10:26:30.853299 13120 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/cert.pem (1123 bytes)
I0317 10:26:30.853322 13120 certs.go:484] found cert: /home/jenkins/minikube-integration/20535-5255/.minikube/certs/key.pem (1675 bytes)
I0317 10:26:30.853917 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0317 10:26:30.880740 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0317 10:26:30.902889 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0317 10:26:30.924776 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0317 10:26:30.947348 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0317 10:26:30.969530 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0317 10:26:30.991292 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0317 10:26:31.013318 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/profiles/addons-415393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0317 10:26:31.035522 13120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20535-5255/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0317 10:26:31.057600 13120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0317 10:26:31.073217 13120 ssh_runner.go:195] Run: openssl version
I0317 10:26:31.078758 13120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0317 10:26:31.089188 13120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0317 10:26:31.093379 13120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 10:26 /usr/share/ca-certificates/minikubeCA.pem
I0317 10:26:31.093436 13120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0317 10:26:31.099047 13120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0317 10:26:31.109398 13120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0317 10:26:31.113222 13120 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0317 10:26:31.113286 13120 kubeadm.go:392] StartCluster: {Name:addons-415393 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-415393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 10:26:31.113357 13120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0317 10:26:31.113415 13120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0317 10:26:31.148024 13120 cri.go:89] found id: ""
I0317 10:26:31.148101 13120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0317 10:26:31.157372 13120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0317 10:26:31.166191 13120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0317 10:26:31.174991 13120 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0317 10:26:31.175007 13120 kubeadm.go:157] found existing configuration files:
I0317 10:26:31.175052 13120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0317 10:26:31.183441 13120 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0317 10:26:31.183494 13120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0317 10:26:31.192358 13120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0317 10:26:31.200646 13120 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0317 10:26:31.200701 13120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0317 10:26:31.209477 13120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0317 10:26:31.217762 13120 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0317 10:26:31.217828 13120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0317 10:26:31.226444 13120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0317 10:26:31.234935 13120 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0317 10:26:31.234992 13120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0317 10:26:31.243686 13120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0317 10:26:31.301219 13120 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0317 10:26:31.301298 13120 kubeadm.go:310] [preflight] Running pre-flight checks
I0317 10:26:31.405484 13120 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0317 10:26:31.405627 13120 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0317 10:26:31.405762 13120 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0317 10:26:31.413840 13120 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0317 10:26:31.481419 13120 out.go:235] - Generating certificates and keys ...
I0317 10:26:31.481527 13120 kubeadm.go:310] [certs] Using existing ca certificate authority
I0317 10:26:31.481584 13120 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0317 10:26:31.768811 13120 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0317 10:26:31.932430 13120 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0317 10:26:32.083100 13120 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0317 10:26:32.221682 13120 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0317 10:26:32.313155 13120 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0317 10:26:32.313339 13120 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-415393 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
I0317 10:26:32.657806 13120 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0317 10:26:32.658003 13120 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-415393 localhost] and IPs [192.168.39.132 127.0.0.1 ::1]
I0317 10:26:32.727376 13120 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0317 10:26:32.799912 13120 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0317 10:26:32.921121 13120 kubeadm.go:310] [certs] Generating "sa" key and public key
I0317 10:26:32.921187 13120 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0317 10:26:33.023111 13120 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0317 10:26:33.093564 13120 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0317 10:26:33.176430 13120 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0317 10:26:33.283632 13120 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0317 10:26:33.443032 13120 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0317 10:26:33.443663 13120 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0317 10:26:33.446178 13120 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0317 10:26:33.558976 13120 out.go:235] - Booting up control plane ...
I0317 10:26:33.559140 13120 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0317 10:26:33.559268 13120 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0317 10:26:33.559359 13120 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0317 10:26:33.559495 13120 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0317 10:26:33.559611 13120 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0317 10:26:33.559673 13120 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0317 10:26:33.600147 13120 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0317 10:26:33.600329 13120 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0317 10:26:34.101806 13120 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.805752ms
I0317 10:26:34.101907 13120 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0317 10:26:39.103075 13120 kubeadm.go:310] [api-check] The API server is healthy after 5.003044866s
I0317 10:26:39.113631 13120 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0317 10:26:39.130693 13120 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0317 10:26:39.156836 13120 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0317 10:26:39.157038 13120 kubeadm.go:310] [mark-control-plane] Marking the node addons-415393 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0317 10:26:39.171497 13120 kubeadm.go:310] [bootstrap-token] Using token: vu6ye3.8256fly4gtx00cky
I0317 10:26:39.172892 13120 out.go:235] - Configuring RBAC rules ...
I0317 10:26:39.173027 13120 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0317 10:26:39.178487 13120 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0317 10:26:39.188679 13120 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0317 10:26:39.192169 13120 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0317 10:26:39.196763 13120 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0317 10:26:39.199874 13120 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0317 10:26:39.507961 13120 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0317 10:26:39.939999 13120 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0317 10:26:40.507506 13120 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0317 10:26:40.507528 13120 kubeadm.go:310]
I0317 10:26:40.507599 13120 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0317 10:26:40.507638 13120 kubeadm.go:310]
I0317 10:26:40.507769 13120 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0317 10:26:40.507785 13120 kubeadm.go:310]
I0317 10:26:40.507824 13120 kubeadm.go:310] mkdir -p $HOME/.kube
I0317 10:26:40.507921 13120 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0317 10:26:40.508010 13120 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0317 10:26:40.508030 13120 kubeadm.go:310]
I0317 10:26:40.508111 13120 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0317 10:26:40.508122 13120 kubeadm.go:310]
I0317 10:26:40.508193 13120 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0317 10:26:40.508212 13120 kubeadm.go:310]
I0317 10:26:40.508273 13120 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0317 10:26:40.508363 13120 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0317 10:26:40.508459 13120 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0317 10:26:40.508475 13120 kubeadm.go:310]
I0317 10:26:40.508587 13120 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0317 10:26:40.508693 13120 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0317 10:26:40.508703 13120 kubeadm.go:310]
I0317 10:26:40.508843 13120 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vu6ye3.8256fly4gtx00cky \
I0317 10:26:40.508989 13120 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:166b268af2a5603f0ec84abe6e6fc04d8919502d537d674d112f1bbfa403d58e \
I0317 10:26:40.509023 13120 kubeadm.go:310] --control-plane
I0317 10:26:40.509032 13120 kubeadm.go:310]
I0317 10:26:40.509154 13120 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0317 10:26:40.509164 13120 kubeadm.go:310]
I0317 10:26:40.509272 13120 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vu6ye3.8256fly4gtx00cky \
I0317 10:26:40.509402 13120 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:166b268af2a5603f0ec84abe6e6fc04d8919502d537d674d112f1bbfa403d58e
I0317 10:26:40.509832 13120 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0317 10:26:40.509859 13120 cni.go:84] Creating CNI manager for ""
I0317 10:26:40.509869 13120 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 10:26:40.512252 13120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0317 10:26:40.513518 13120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0317 10:26:40.524844 13120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0317 10:26:40.542008 13120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0317 10:26:40.542137 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-415393 minikube.k8s.io/updated_at=2025_03_17T10_26_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76 minikube.k8s.io/name=addons-415393 minikube.k8s.io/primary=true
I0317 10:26:40.542143 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:40.575653 13120 ops.go:34] apiserver oom_adj: -16
I0317 10:26:40.639539 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:41.139560 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:41.640056 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:42.140637 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:42.640416 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:43.140285 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:43.640210 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:44.140162 13120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 10:26:44.243927 13120 kubeadm.go:1113] duration metric: took 3.701861249s to wait for elevateKubeSystemPrivileges
I0317 10:26:44.243962 13120 kubeadm.go:394] duration metric: took 13.130679479s to StartCluster
I0317 10:26:44.243984 13120 settings.go:142] acquiring lock: {Name:mkb16e90057d62a39a82455b50e79ad6d8ecde61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:44.244113 13120 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20535-5255/kubeconfig
I0317 10:26:44.244528 13120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20535-5255/kubeconfig: {Name:mkf0191c6db2de376acf7190727c7390c7a4ffcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 10:26:44.244710 13120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0317 10:26:44.244757 13120 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.132 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0317 10:26:44.244821 13120 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0317 10:26:44.244943 13120 addons.go:69] Setting yakd=true in profile "addons-415393"
I0317 10:26:44.244964 13120 addons.go:238] Setting addon yakd=true in "addons-415393"
I0317 10:26:44.244969 13120 addons.go:69] Setting ingress-dns=true in profile "addons-415393"
I0317 10:26:44.244991 13120 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-415393"
I0317 10:26:44.244996 13120 addons.go:238] Setting addon ingress-dns=true in "addons-415393"
I0317 10:26:44.245065 13120 addons.go:69] Setting volumesnapshots=true in profile "addons-415393"
I0317 10:26:44.245095 13120 addons.go:238] Setting addon volumesnapshots=true in "addons-415393"
I0317 10:26:44.245119 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.244979 13120 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-415393"
I0317 10:26:44.245156 13120 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-415393"
I0317 10:26:44.245120 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.244961 13120 config.go:182] Loaded profile config "addons-415393": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 10:26:44.245006 13120 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-415393"
I0317 10:26:44.245327 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.245005 13120 addons.go:69] Setting registry=true in profile "addons-415393"
I0317 10:26:44.245429 13120 addons.go:238] Setting addon registry=true in "addons-415393"
I0317 10:26:44.245458 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.245013 13120 addons.go:69] Setting storage-provisioner=true in profile "addons-415393"
I0317 10:26:44.245521 13120 addons.go:238] Setting addon storage-provisioner=true in "addons-415393"
I0317 10:26:44.245550 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.245597 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.245631 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.245679 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.245706 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.245723 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.245738 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.245019 13120 addons.go:69] Setting cloud-spanner=true in profile "addons-415393"
I0317 10:26:44.245882 13120 addons.go:238] Setting addon cloud-spanner=true in "addons-415393"
I0317 10:26:44.245020 13120 addons.go:69] Setting inspektor-gadget=true in profile "addons-415393"
I0317 10:26:44.245886 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.245896 13120 addons.go:238] Setting addon inspektor-gadget=true in "addons-415393"
I0317 10:26:44.245030 13120 addons.go:69] Setting metrics-server=true in profile "addons-415393"
I0317 10:26:44.245919 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.245924 13120 addons.go:238] Setting addon metrics-server=true in "addons-415393"
I0317 10:26:44.245024 13120 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-415393"
I0317 10:26:44.246005 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.246036 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.246023 13120 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-415393"
I0317 10:26:44.245038 13120 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-415393"
I0317 10:26:44.246323 13120 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-415393"
I0317 10:26:44.246347 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.246390 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.246393 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.246438 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.246473 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.245034 13120 addons.go:69] Setting gcp-auth=true in profile "addons-415393"
I0317 10:26:44.246619 13120 mustload.go:65] Loading cluster: addons-415393
I0317 10:26:44.246690 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.246714 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.246733 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.246739 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.245040 13120 addons.go:69] Setting default-storageclass=true in profile "addons-415393"
I0317 10:26:44.245178 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.245002 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.245030 13120 addons.go:69] Setting volcano=true in profile "addons-415393"
I0317 10:26:44.245048 13120 addons.go:69] Setting ingress=true in profile "addons-415393"
I0317 10:26:44.246760 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.246764 13120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-415393"
I0317 10:26:44.246766 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.246770 13120 addons.go:238] Setting addon ingress=true in "addons-415393"
I0317 10:26:44.247076 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.247100 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.247128 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.247150 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.247150 13120 addons.go:238] Setting addon volcano=true in "addons-415393"
I0317 10:26:44.247173 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.247194 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.247235 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.247383 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.247577 13120 config.go:182] Loaded profile config "addons-415393": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 10:26:44.247735 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.247767 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.247793 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.247738 13120 out.go:177] * Verifying Kubernetes components...
I0317 10:26:44.248140 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.248177 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.256191 13120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 10:26:44.267000 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
I0317 10:26:44.267413 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
I0317 10:26:44.267596 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.267917 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.268092 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.268111 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.268745 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.268804 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.268822 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.269249 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.269372 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.269421 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.269774 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.269807 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.269895 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43369
I0317 10:26:44.272270 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41001
I0317 10:26:44.272737 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.273195 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.273236 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.273302 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.273339 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.273642 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.273661 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.273738 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.280250 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.280465 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.280479 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.280534 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
I0317 10:26:44.280891 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.281349 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.281392 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.287692 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.287756 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.288055 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.288969 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.288991 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.289630 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.290178 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.290248 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.296849 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
I0317 10:26:44.297281 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.297710 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.297727 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.298061 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.298570 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.298612 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.299570 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
I0317 10:26:44.301125 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.301644 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.301658 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.302296 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.304248 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.304288 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.304831 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34793
I0317 10:26:44.305552 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.306127 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.306143 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.306601 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.307361 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.307410 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.313763 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35073
I0317 10:26:44.314304 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.314797 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.314817 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.315222 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.315418 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.315622 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37929
I0317 10:26:44.316043 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.316447 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.316461 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.316807 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.317357 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.317386 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.317634 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.319155 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44019
I0317 10:26:44.319298 13120 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0317 10:26:44.319612 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.320076 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.320094 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.320535 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.320748 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.320886 13120 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0317 10:26:44.320904 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0317 10:26:44.320920 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.323994 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.324460 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.324490 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.324621 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.324814 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.324960 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.325089 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.326227 13120 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-415393"
I0317 10:26:44.326260 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.326506 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.326538 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.327517 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
I0317 10:26:44.328113 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.328692 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.328725 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.329693 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.330306 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.330352 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.332342 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33679
I0317 10:26:44.336818 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37833
I0317 10:26:44.337010 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
I0317 10:26:44.338207 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.338724 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.338910 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.338933 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.339266 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.339413 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.339436 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.339492 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.340095 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.340625 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.340668 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.341356 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.344118 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45623
I0317 10:26:44.344794 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45861
I0317 10:26:44.346507 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37263
I0317 10:26:44.347008 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.347922 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43523
I0317 10:26:44.348399 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44813
I0317 10:26:44.348406 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.348680 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.348695 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.348881 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.348899 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.349174 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.349347 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.349579 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.349632 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
I0317 10:26:44.350047 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.350298 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.350353 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.350862 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.350878 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.350939 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44523
I0317 10:26:44.351465 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.351587 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.351598 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.351649 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.351897 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.352011 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.352021 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.352078 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.352682 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.352768 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.352803 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.352998 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.353119 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:44.353131 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:44.353357 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.353397 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.353454 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:44.353478 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:44.353485 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:44.353493 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:44.353499 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:44.353712 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:44.353738 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:44.353745 13120 main.go:141] libmachine: Making call to close connection to plugin binary
W0317 10:26:44.353816 13120 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0317 10:26:44.354711 13120 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0317 10:26:44.355120 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
I0317 10:26:44.355153 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37779
I0317 10:26:44.355639 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.355730 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.355816 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.356292 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.356307 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.356366 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.356371 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.356384 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.356619 13120 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0317 10:26:44.356637 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0317 10:26:44.356638 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.356656 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.357146 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.357211 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.357348 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.357950 13120 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0317 10:26:44.358068 13120 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0317 10:26:44.359286 13120 out.go:177] - Using image docker.io/registry:2.8.3
I0317 10:26:44.359373 13120 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0317 10:26:44.359385 13120 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0317 10:26:44.359405 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.360343 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
I0317 10:26:44.360366 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.360508 13120 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0317 10:26:44.360519 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0317 10:26:44.360534 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.361375 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.361415 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.361682 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.361739 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.361757 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.361776 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.361793 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.361825 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.361901 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.362149 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.362209 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.362247 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.362310 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.362328 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.362435 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.362443 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.362623 13120 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0317 10:26:44.363045 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.363065 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.363045 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.363161 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.363174 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.363314 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.363725 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.363756 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0317 10:26:44.363769 13120 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0317 10:26:44.363782 13120 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0317 10:26:44.363813 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.363822 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.363824 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.363785 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.364104 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.364689 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.364918 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.365417 13120 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0317 10:26:44.365436 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0317 10:26:44.365452 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.366250 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.366387 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.367282 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.367327 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.367383 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.368189 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.368230 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.368251 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.368448 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.368655 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.368793 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.368811 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.369015 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.369211 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.369219 13120 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.29
I0317 10:26:44.369264 13120 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0317 10:26:44.369321 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.369456 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.369512 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.369680 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.370510 13120 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0317 10:26:44.370536 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0317 10:26:44.370555 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.370683 13120 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0317 10:26:44.370695 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0317 10:26:44.370707 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.372837 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.372909 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.372929 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.372946 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.373803 13120 addons.go:238] Setting addon default-storageclass=true in "addons-415393"
I0317 10:26:44.373853 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:44.374215 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.374256 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.374612 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.374648 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.374673 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.374801 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.374832 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.374885 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.374917 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.374937 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.375051 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.375161 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.375322 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.375352 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.375363 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.375353 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.375481 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.375501 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.375655 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.375662 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.375738 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.375797 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.375806 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.375894 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.376006 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.376223 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.389164 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40105
I0317 10:26:44.389997 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.390582 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.390603 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.391008 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.391583 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.391630 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.393200 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
I0317 10:26:44.393352 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
I0317 10:26:44.393536 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
I0317 10:26:44.394061 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.394110 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.394120 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.394683 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.394704 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.394823 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.394838 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.394952 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.394964 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.395190 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.395246 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.395274 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45003
I0317 10:26:44.395701 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.395740 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.395978 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.396026 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:44.396040 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.396068 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:44.396454 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.396470 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.396866 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.397045 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.397293 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33479
I0317 10:26:44.397681 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.398044 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.398064 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.398977 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.399120 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.399494 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.400077 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.401037 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.401402 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0317 10:26:44.401930 13120 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 10:26:44.402593 13120 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0317 10:26:44.403206 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
I0317 10:26:44.403586 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.403850 13120 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0317 10:26:44.403857 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0317 10:26:44.403878 13120 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0317 10:26:44.403900 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.404054 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.404076 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.404515 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.404706 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.405154 13120 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 10:26:44.406397 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.406459 13120 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0317 10:26:44.407605 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.407740 13120 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0317 10:26:44.407726 13120 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0317 10:26:44.408010 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.408159 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.408707 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.408915 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.409060 13120 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0317 10:26:44.409553 13120 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0317 10:26:44.409571 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0317 10:26:44.409588 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.409075 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.409797 13120 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0317 10:26:44.409827 13120 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0317 10:26:44.409848 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.410221 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.412962 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.413348 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.413369 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.413505 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.413552 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0317 10:26:44.413653 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.413754 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.413808 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.413961 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.414367 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.414385 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.414416 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.414555 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.414710 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.414890 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.415833 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0317 10:26:44.417142 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0317 10:26:44.418289 13120 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0317 10:26:44.418907 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
I0317 10:26:44.419314 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0317 10:26:44.419332 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0317 10:26:44.419347 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.422087 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.422836 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.422857 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.423354 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.423520 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.425556 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.425572 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.426318 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.426349 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.426392 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.426548 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.426604 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
I0317 10:26:44.426751 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.426880 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.426955 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:44.427068 13120 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0317 10:26:44.427350 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:44.427364 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:44.427662 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:44.427827 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:44.429233 13120 out.go:177] - Using image docker.io/busybox:stable
I0317 10:26:44.429249 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:44.429438 13120 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0317 10:26:44.429449 13120 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0317 10:26:44.429460 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.430459 13120 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0317 10:26:44.430474 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0317 10:26:44.430490 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:44.432405 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.432741 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.432788 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.433081 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.433292 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.433455 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.433640 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.434933 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.435324 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:44.435415 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:44.435612 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:44.435804 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:44.435976 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:44.436125 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:44.760430 13120 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0317 10:26:44.760457 13120 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0317 10:26:44.787095 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0317 10:26:44.833470 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0317 10:26:44.861218 13120 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0317 10:26:44.861242 13120 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0317 10:26:44.923697 13120 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0317 10:26:44.923721 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0317 10:26:44.924664 13120 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 10:26:44.924871 13120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0317 10:26:44.953341 13120 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0317 10:26:44.953377 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0317 10:26:44.960712 13120 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0317 10:26:44.960749 13120 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0317 10:26:44.984680 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0317 10:26:45.029753 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0317 10:26:45.030503 13120 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0317 10:26:45.030526 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0317 10:26:45.035769 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0317 10:26:45.040679 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0317 10:26:45.040708 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0317 10:26:45.056074 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0317 10:26:45.073579 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0317 10:26:45.081812 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0317 10:26:45.108540 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0317 10:26:45.125255 13120 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0317 10:26:45.125297 13120 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0317 10:26:45.145042 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0317 10:26:45.179906 13120 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0317 10:26:45.179934 13120 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0317 10:26:45.238239 13120 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0317 10:26:45.238264 13120 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0317 10:26:45.239044 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0317 10:26:45.239067 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0317 10:26:45.311349 13120 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0317 10:26:45.311383 13120 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0317 10:26:45.364319 13120 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0317 10:26:45.364347 13120 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0317 10:26:45.414891 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0317 10:26:45.414919 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0317 10:26:45.444291 13120 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0317 10:26:45.444318 13120 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0317 10:26:45.590351 13120 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0317 10:26:45.590377 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0317 10:26:45.617995 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0317 10:26:45.630439 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0317 10:26:45.630486 13120 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0317 10:26:45.674679 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0317 10:26:45.674714 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0317 10:26:45.716272 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0317 10:26:45.791402 13120 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0317 10:26:45.791433 13120 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0317 10:26:45.828622 13120 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 10:26:45.828649 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0317 10:26:46.152885 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0317 10:26:46.152907 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0317 10:26:46.233201 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 10:26:46.460419 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0317 10:26:46.460446 13120 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0317 10:26:46.717610 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0317 10:26:46.717636 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0317 10:26:47.109864 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0317 10:26:47.109885 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0317 10:26:47.312741 13120 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0317 10:26:47.312775 13120 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0317 10:26:47.542828 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0317 10:26:49.079453 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.29232149s)
I0317 10:26:49.079499 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:49.079515 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:49.079533 13120 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.15485111s)
I0317 10:26:49.079499 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.245990547s)
I0317 10:26:49.079576 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:49.079594 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:49.079628 13120 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.154728976s)
I0317 10:26:49.079650 13120 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0317 10:26:49.079905 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:49.079916 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:49.079936 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:49.079946 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:49.079949 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:49.079960 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:49.079961 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:49.080027 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:49.080035 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:49.080609 13120 node_ready.go:35] waiting up to 6m0s for node "addons-415393" to be "Ready" ...
I0317 10:26:49.080769 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:49.080770 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:49.080790 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:49.080821 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:49.080846 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:49.089498 13120 node_ready.go:49] node "addons-415393" has status "Ready":"True"
I0317 10:26:49.089517 13120 node_ready.go:38] duration metric: took 8.882785ms for node "addons-415393" to be "Ready" ...
I0317 10:26:49.089526 13120 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 10:26:49.109968 13120 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace to be "Ready" ...
I0317 10:26:49.589007 13120 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-415393" context rescaled to 1 replicas
I0317 10:26:51.203001 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:26:51.240810 13120 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0317 10:26:51.240857 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:51.244224 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:51.244693 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:51.244740 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:51.244901 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:51.245106 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:51.245274 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:51.245420 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:51.706886 13120 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0317 10:26:51.910037 13120 addons.go:238] Setting addon gcp-auth=true in "addons-415393"
I0317 10:26:51.910100 13120 host.go:66] Checking if "addons-415393" exists ...
I0317 10:26:51.910534 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:51.910606 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:51.926998 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
I0317 10:26:51.927443 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:51.927919 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:51.927942 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:51.928339 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:51.928839 13120 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 10:26:51.928882 13120 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 10:26:51.945967 13120 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
I0317 10:26:51.946439 13120 main.go:141] libmachine: () Calling .GetVersion
I0317 10:26:51.946881 13120 main.go:141] libmachine: Using API Version 1
I0317 10:26:51.946901 13120 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 10:26:51.947301 13120 main.go:141] libmachine: () Calling .GetMachineName
I0317 10:26:51.947530 13120 main.go:141] libmachine: (addons-415393) Calling .GetState
I0317 10:26:51.949194 13120 main.go:141] libmachine: (addons-415393) Calling .DriverName
I0317 10:26:51.949425 13120 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0317 10:26:51.949447 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHHostname
I0317 10:26:51.952571 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:51.953073 13120 main.go:141] libmachine: (addons-415393) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:ea:9e", ip: ""} in network mk-addons-415393: {Iface:virbr1 ExpiryTime:2025-03-17 11:26:15 +0000 UTC Type:0 Mac:52:54:00:e8:ea:9e Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:addons-415393 Clientid:01:52:54:00:e8:ea:9e}
I0317 10:26:51.953101 13120 main.go:141] libmachine: (addons-415393) DBG | domain addons-415393 has defined IP address 192.168.39.132 and MAC address 52:54:00:e8:ea:9e in network mk-addons-415393
I0317 10:26:51.953286 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHPort
I0317 10:26:51.953508 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHKeyPath
I0317 10:26:51.953710 13120 main.go:141] libmachine: (addons-415393) Calling .GetSSHUsername
I0317 10:26:51.953862 13120 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20535-5255/.minikube/machines/addons-415393/id_rsa Username:docker}
I0317 10:26:52.373572 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.388858695s)
I0317 10:26:52.373635 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373641 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.343846619s)
I0317 10:26:52.373680 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373699 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373700 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.337904726s)
I0317 10:26:52.373648 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373752 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.317646198s)
I0317 10:26:52.373722 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373776 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373783 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373793 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373811 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.300203428s)
I0317 10:26:52.373835 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373848 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373853 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.292008159s)
I0317 10:26:52.373881 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373887 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.265319101s)
I0317 10:26:52.373897 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373906 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373917 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.373951 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.228883616s)
I0317 10:26:52.373967 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.373975 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.374075 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.756051803s)
I0317 10:26:52.374091 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.374099 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.374170 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.657867642s)
I0317 10:26:52.374186 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.374195 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.374195 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.374204 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.374212 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.374219 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.374260 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.374272 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.374281 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.374289 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.374312 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.141082059s)
W0317 10:26:52.374340 13120 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0317 10:26:52.374371 13120 retry.go:31] will retry after 193.427592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0317 10:26:52.374408 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.374428 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.374452 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.374459 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.374467 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.374473 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376342 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376344 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376368 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376373 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376388 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376394 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376396 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.376406 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376464 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376471 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376481 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376488 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.376495 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376522 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376544 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376551 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376558 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.376564 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376612 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376629 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376634 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376641 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.376647 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376680 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376697 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376702 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376956 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376980 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.376986 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.376994 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.377000 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.377043 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.377061 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.377067 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.377075 13120 addons.go:479] Verifying addon registry=true in "addons-415393"
I0317 10:26:52.377634 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.376378 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.378132 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.376473 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.378189 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.378197 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.378359 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.378386 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.378395 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.378458 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.378483 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.378489 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.378497 13120 addons.go:479] Verifying addon ingress=true in "addons-415393"
I0317 10:26:52.378593 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.378626 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.378634 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.378643 13120 addons.go:479] Verifying addon metrics-server=true in "addons-415393"
I0317 10:26:52.378847 13120 out.go:177] * Verifying registry addon...
I0317 10:26:52.379136 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.379167 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.379175 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.379199 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.379223 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.379286 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.379292 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.379531 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.379542 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.379732 13120 out.go:177] * Verifying ingress addon...
I0317 10:26:52.379816 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.379847 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.379855 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.380668 13120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0317 10:26:52.381699 13120 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0317 10:26:52.382955 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.382974 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.383455 13120 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-415393 service yakd-dashboard -n yakd-dashboard
I0317 10:26:52.397659 13120 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0317 10:26:52.397680 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:52.403833 13120 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0317 10:26:52.403856 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:52.414565 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.414591 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.414852 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.414909 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.414919 13120 main.go:141] libmachine: Making call to close connection to plugin binary
W0317 10:26:52.415013 13120 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0317 10:26:52.420441 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:52.420459 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:52.420796 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:52.420814 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:52.420822 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:52.568533 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 10:26:52.886614 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:52.887265 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:53.405981 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:53.406029 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:53.619640 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:26:53.897512 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:53.897537 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:53.979514 13120 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.030064379s)
I0317 10:26:53.979630 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.411062429s)
I0317 10:26:53.979695 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:53.979713 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:53.979743 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.436868319s)
I0317 10:26:53.979784 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:53.979798 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:53.980013 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:53.980046 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:53.980097 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:53.980143 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:53.980158 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:53.980168 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:53.980098 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:53.980224 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:53.980242 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:53.980259 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:53.980367 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:53.980381 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:53.980391 13120 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-415393"
I0317 10:26:53.980480 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:53.980544 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:53.980687 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:53.981964 13120 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 10:26:53.982586 13120 out.go:177] * Verifying csi-hostpath-driver addon...
I0317 10:26:53.984077 13120 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0317 10:26:53.984704 13120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 10:26:53.985444 13120 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0317 10:26:53.985460 13120 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0317 10:26:54.002819 13120 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 10:26:54.002840 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:54.040657 13120 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0317 10:26:54.040681 13120 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0317 10:26:54.080901 13120 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0317 10:26:54.080924 13120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0317 10:26:54.161043 13120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0317 10:26:54.385629 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:54.387020 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:54.489005 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:54.889320 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:54.891311 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:55.000866 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:55.241460 13120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.080375796s)
I0317 10:26:55.241518 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:55.241535 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:55.241860 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:55.241970 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:55.241991 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:55.242006 13120 main.go:141] libmachine: Making call to close driver server
I0317 10:26:55.242018 13120 main.go:141] libmachine: (addons-415393) Calling .Close
I0317 10:26:55.242277 13120 main.go:141] libmachine: (addons-415393) DBG | Closing plugin on server side
I0317 10:26:55.242311 13120 main.go:141] libmachine: Successfully made call to close driver server
I0317 10:26:55.242327 13120 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 10:26:55.244031 13120 addons.go:479] Verifying addon gcp-auth=true in "addons-415393"
I0317 10:26:55.245511 13120 out.go:177] * Verifying gcp-auth addon...
I0317 10:26:55.247168 13120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0317 10:26:55.267821 13120 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0317 10:26:55.267839 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:55.384469 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:55.384647 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:55.487858 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:55.754202 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:55.883811 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:55.885683 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:55.988871 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:56.115550 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:26:56.250327 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:56.384554 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:56.384803 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:56.487684 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:56.751864 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:56.885471 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:56.885490 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:56.988385 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:57.250532 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:57.386430 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:57.386970 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:57.488073 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:57.750434 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:57.884367 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:57.884876 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:57.987880 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:58.250705 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:58.383179 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:58.384743 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:58.487578 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:58.614623 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:26:58.750528 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:58.885350 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:58.885458 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:58.988784 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:59.250240 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:59.383641 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:59.385424 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:59.488065 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:26:59.750663 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:26:59.885506 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:26:59.886459 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:26:59.988357 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:00.395303 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:00.492886 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:00.492886 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:00.493249 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:00.750414 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:00.885223 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:00.885656 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:00.988972 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:01.114692 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:01.250119 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:01.384030 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:01.385368 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:01.489654 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:01.750771 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:01.884824 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:01.884824 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:01.987871 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:02.251518 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:02.629196 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:02.629217 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:02.629467 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:02.749888 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:02.883947 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:02.884320 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:03.083847 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:03.116034 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:03.296686 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:03.384405 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:03.384829 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:03.487739 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:03.750636 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:03.884888 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:03.885691 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:03.988868 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:04.250275 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:04.383831 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:04.386024 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:04.488649 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:05.001922 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:05.004948 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:05.005504 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:05.005721 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:05.250491 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:05.384920 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:05.385131 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:05.488351 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:05.615187 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:05.750735 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:05.885263 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:05.886438 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:05.988389 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:06.251829 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:06.383351 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:06.384914 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:06.487909 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:06.750039 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:06.883833 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:06.885076 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:06.988245 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:07.251325 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:07.384843 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:07.384919 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:07.488669 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:07.751144 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:07.884302 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:07.885693 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:07.988352 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:08.114632 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:08.251633 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:08.383610 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:08.384263 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:08.488151 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:08.750048 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:08.883927 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:08.884472 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:08.988523 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:09.251411 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:09.383900 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:09.384999 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:09.488697 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:09.750292 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:09.884104 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:09.884911 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:09.987942 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:10.131768 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:10.251888 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:10.383715 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:10.385034 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:10.488166 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:10.971915 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:10.972260 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:10.973377 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:10.991775 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:11.250754 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:11.384335 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:11.384967 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:11.487808 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:11.750403 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:11.884333 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:11.885004 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:11.988026 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:12.250957 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:12.384765 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:12.385211 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:12.488630 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:12.615102 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:12.750894 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:12.884340 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:12.885184 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:12.988665 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:13.251537 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:13.385101 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:13.385308 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:13.491077 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:13.750535 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:13.885453 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:13.885550 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:13.989766 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:14.251266 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:14.387575 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:14.388477 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:14.489720 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:14.616207 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:14.750404 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:14.885093 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:14.885392 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:14.988785 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:15.250548 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:15.385749 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:15.386380 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:15.488394 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:15.750608 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:15.884754 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:15.884858 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:15.987561 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:16.251292 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:16.384328 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:16.384926 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:16.487877 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:16.751131 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:16.884124 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:16.884800 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:16.988226 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:17.115421 13120 pod_ready.go:103] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"False"
I0317 10:27:17.250267 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:17.385553 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:17.385656 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:17.488693 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:17.615294 13120 pod_ready.go:93] pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:17.615323 13120 pod_ready.go:82] duration metric: took 28.505330718s for pod "amd-gpu-device-plugin-wwrf9" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.615335 13120 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-vj8q2" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.616978 13120 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-vj8q2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-vj8q2" not found
I0317 10:27:17.616997 13120 pod_ready.go:82] duration metric: took 1.655718ms for pod "coredns-668d6bf9bc-vj8q2" in "kube-system" namespace to be "Ready" ...
E0317 10:27:17.617007 13120 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-vj8q2" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-vj8q2" not found
I0317 10:27:17.617013 13120 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z6pcb" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.620076 13120 pod_ready.go:93] pod "coredns-668d6bf9bc-z6pcb" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:17.620090 13120 pod_ready.go:82] duration metric: took 3.071393ms for pod "coredns-668d6bf9bc-z6pcb" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.620098 13120 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.624687 13120 pod_ready.go:93] pod "etcd-addons-415393" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:17.624704 13120 pod_ready.go:82] duration metric: took 4.600832ms for pod "etcd-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.624711 13120 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.628548 13120 pod_ready.go:93] pod "kube-apiserver-addons-415393" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:17.628561 13120 pod_ready.go:82] duration metric: took 3.820591ms for pod "kube-apiserver-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.628569 13120 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.750591 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:17.814734 13120 pod_ready.go:93] pod "kube-controller-manager-addons-415393" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:17.814754 13120 pod_ready.go:82] duration metric: took 186.179858ms for pod "kube-controller-manager-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.814764 13120 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s56k7" in "kube-system" namespace to be "Ready" ...
I0317 10:27:17.883547 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:17.884886 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:17.989629 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:18.215878 13120 pod_ready.go:93] pod "kube-proxy-s56k7" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:18.215900 13120 pod_ready.go:82] duration metric: took 401.129933ms for pod "kube-proxy-s56k7" in "kube-system" namespace to be "Ready" ...
I0317 10:27:18.215908 13120 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:18.251042 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:18.384211 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:18.386352 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:18.488381 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:18.613952 13120 pod_ready.go:93] pod "kube-scheduler-addons-415393" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:18.613972 13120 pod_ready.go:82] duration metric: took 398.058087ms for pod "kube-scheduler-addons-415393" in "kube-system" namespace to be "Ready" ...
I0317 10:27:18.613981 13120 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-492js" in "kube-system" namespace to be "Ready" ...
I0317 10:27:18.751421 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:18.885485 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:18.885861 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:18.987869 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:19.014705 13120 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-492js" in "kube-system" namespace has status "Ready":"True"
I0317 10:27:19.014735 13120 pod_ready.go:82] duration metric: took 400.746877ms for pod "nvidia-device-plugin-daemonset-492js" in "kube-system" namespace to be "Ready" ...
I0317 10:27:19.014749 13120 pod_ready.go:39] duration metric: took 29.925212547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 10:27:19.014767 13120 api_server.go:52] waiting for apiserver process to appear ...
I0317 10:27:19.014827 13120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0317 10:27:19.031628 13120 api_server.go:72] duration metric: took 34.786835266s to wait for apiserver process to appear ...
I0317 10:27:19.031653 13120 api_server.go:88] waiting for apiserver healthz status ...
I0317 10:27:19.031673 13120 api_server.go:253] Checking apiserver healthz at https://192.168.39.132:8443/healthz ...
I0317 10:27:19.036621 13120 api_server.go:279] https://192.168.39.132:8443/healthz returned 200:
ok
I0317 10:27:19.037647 13120 api_server.go:141] control plane version: v1.32.2
I0317 10:27:19.037670 13120 api_server.go:131] duration metric: took 6.011281ms to wait for apiserver health ...
I0317 10:27:19.037677 13120 system_pods.go:43] waiting for kube-system pods to appear ...
I0317 10:27:19.215397 13120 system_pods.go:59] 18 kube-system pods found
I0317 10:27:19.215438 13120 system_pods.go:61] "amd-gpu-device-plugin-wwrf9" [3d8bbaa4-cc81-4d9b-8a32-241d468adc22] Running
I0317 10:27:19.215446 13120 system_pods.go:61] "coredns-668d6bf9bc-z6pcb" [b5987f43-3744-41ad-952e-d2dcdb1cb8fe] Running
I0317 10:27:19.215454 13120 system_pods.go:61] "csi-hostpath-attacher-0" [6f54e0cc-d6b0-4439-ac5d-72a4ca1a0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0317 10:27:19.215463 13120 system_pods.go:61] "csi-hostpath-resizer-0" [cf5c147c-8add-429f-8e3d-22a1a3626fdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0317 10:27:19.215488 13120 system_pods.go:61] "csi-hostpathplugin-bcvd8" [1b68c24f-6f09-466e-845d-62d4ef663eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0317 10:27:19.215499 13120 system_pods.go:61] "etcd-addons-415393" [137933c7-4497-4d10-b4ab-5b99011637a3] Running
I0317 10:27:19.215505 13120 system_pods.go:61] "kube-apiserver-addons-415393" [5da7c970-4c2a-402e-a770-d3fb46d8bc33] Running
I0317 10:27:19.215510 13120 system_pods.go:61] "kube-controller-manager-addons-415393" [fe0d8ed4-ea1c-40a5-a079-c69f095f1d74] Running
I0317 10:27:19.215516 13120 system_pods.go:61] "kube-ingress-dns-minikube" [cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08] Running
I0317 10:27:19.215522 13120 system_pods.go:61] "kube-proxy-s56k7" [4cf85691-4d8b-4d73-ba24-40607a1b54fd] Running
I0317 10:27:19.215530 13120 system_pods.go:61] "kube-scheduler-addons-415393" [6f1ecd35-ef0d-46cb-8b93-94c618a7c477] Running
I0317 10:27:19.215538 13120 system_pods.go:61] "metrics-server-7fbb699795-kd8n5" [e08189bf-2a1a-4534-9267-741405cbfa16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0317 10:27:19.215547 13120 system_pods.go:61] "nvidia-device-plugin-daemonset-492js" [943e9bf8-23fb-44bc-a308-82172406deb9] Running
I0317 10:27:19.215553 13120 system_pods.go:61] "registry-6c88467877-rnqb2" [7aecacb8-b0a9-4043-b627-b18c2af11578] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0317 10:27:19.215562 13120 system_pods.go:61] "registry-proxy-6mwt4" [14ad57b8-46cd-497e-a97a-bb7047e74826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0317 10:27:19.215572 13120 system_pods.go:61] "snapshot-controller-68b874b76f-fjqz5" [2b441913-925d-4780-9896-3904e05ad034] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 10:27:19.215584 13120 system_pods.go:61] "snapshot-controller-68b874b76f-h7sc6" [0f0b28fa-c4c1-45ee-b1fc-87d55fe5c106] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 10:27:19.215591 13120 system_pods.go:61] "storage-provisioner" [5d55183f-b056-4716-8c4a-ec30a50fc604] Running
I0317 10:27:19.215599 13120 system_pods.go:74] duration metric: took 177.916175ms to wait for pod list to return data ...
I0317 10:27:19.215609 13120 default_sa.go:34] waiting for default service account to be created ...
I0317 10:27:19.250642 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:19.385763 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:19.385909 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:19.414129 13120 default_sa.go:45] found service account: "default"
I0317 10:27:19.414158 13120 default_sa.go:55] duration metric: took 198.542505ms for default service account to be created ...
I0317 10:27:19.414167 13120 system_pods.go:116] waiting for k8s-apps to be running ...
I0317 10:27:19.488700 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:19.615875 13120 system_pods.go:86] 18 kube-system pods found
I0317 10:27:19.615913 13120 system_pods.go:89] "amd-gpu-device-plugin-wwrf9" [3d8bbaa4-cc81-4d9b-8a32-241d468adc22] Running
I0317 10:27:19.615923 13120 system_pods.go:89] "coredns-668d6bf9bc-z6pcb" [b5987f43-3744-41ad-952e-d2dcdb1cb8fe] Running
I0317 10:27:19.615935 13120 system_pods.go:89] "csi-hostpath-attacher-0" [6f54e0cc-d6b0-4439-ac5d-72a4ca1a0e3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0317 10:27:19.615944 13120 system_pods.go:89] "csi-hostpath-resizer-0" [cf5c147c-8add-429f-8e3d-22a1a3626fdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0317 10:27:19.615954 13120 system_pods.go:89] "csi-hostpathplugin-bcvd8" [1b68c24f-6f09-466e-845d-62d4ef663eb7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0317 10:27:19.615960 13120 system_pods.go:89] "etcd-addons-415393" [137933c7-4497-4d10-b4ab-5b99011637a3] Running
I0317 10:27:19.615966 13120 system_pods.go:89] "kube-apiserver-addons-415393" [5da7c970-4c2a-402e-a770-d3fb46d8bc33] Running
I0317 10:27:19.615972 13120 system_pods.go:89] "kube-controller-manager-addons-415393" [fe0d8ed4-ea1c-40a5-a079-c69f095f1d74] Running
I0317 10:27:19.615981 13120 system_pods.go:89] "kube-ingress-dns-minikube" [cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08] Running
I0317 10:27:19.615986 13120 system_pods.go:89] "kube-proxy-s56k7" [4cf85691-4d8b-4d73-ba24-40607a1b54fd] Running
I0317 10:27:19.615992 13120 system_pods.go:89] "kube-scheduler-addons-415393" [6f1ecd35-ef0d-46cb-8b93-94c618a7c477] Running
I0317 10:27:19.616000 13120 system_pods.go:89] "metrics-server-7fbb699795-kd8n5" [e08189bf-2a1a-4534-9267-741405cbfa16] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0317 10:27:19.616009 13120 system_pods.go:89] "nvidia-device-plugin-daemonset-492js" [943e9bf8-23fb-44bc-a308-82172406deb9] Running
I0317 10:27:19.616017 13120 system_pods.go:89] "registry-6c88467877-rnqb2" [7aecacb8-b0a9-4043-b627-b18c2af11578] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0317 10:27:19.616025 13120 system_pods.go:89] "registry-proxy-6mwt4" [14ad57b8-46cd-497e-a97a-bb7047e74826] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0317 10:27:19.616036 13120 system_pods.go:89] "snapshot-controller-68b874b76f-fjqz5" [2b441913-925d-4780-9896-3904e05ad034] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 10:27:19.616048 13120 system_pods.go:89] "snapshot-controller-68b874b76f-h7sc6" [0f0b28fa-c4c1-45ee-b1fc-87d55fe5c106] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 10:27:19.616055 13120 system_pods.go:89] "storage-provisioner" [5d55183f-b056-4716-8c4a-ec30a50fc604] Running
I0317 10:27:19.616068 13120 system_pods.go:126] duration metric: took 201.894799ms to wait for k8s-apps to be running ...
I0317 10:27:19.616080 13120 system_svc.go:44] waiting for kubelet service to be running ....
I0317 10:27:19.616135 13120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0317 10:27:19.631357 13120 system_svc.go:56] duration metric: took 15.270782ms WaitForService to wait for kubelet
I0317 10:27:19.631382 13120 kubeadm.go:582] duration metric: took 35.386594302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0317 10:27:19.631398 13120 node_conditions.go:102] verifying NodePressure condition ...
I0317 10:27:19.750471 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:19.814423 13120 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0317 10:27:19.814453 13120 node_conditions.go:123] node cpu capacity is 2
I0317 10:27:19.814469 13120 node_conditions.go:105] duration metric: took 183.06596ms to run NodePressure ...
I0317 10:27:19.814482 13120 start.go:241] waiting for startup goroutines ...
I0317 10:27:19.885125 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:19.885146 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:19.988066 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:20.251693 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:20.386341 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:20.386477 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:20.489261 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:20.750340 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:20.886190 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:20.886191 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:20.988592 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:21.253009 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:21.385046 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:21.385174 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:21.489987 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:21.751007 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:21.884527 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:21.884891 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:21.987912 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:22.250909 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:22.383804 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:22.384810 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:22.488177 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:22.750033 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:22.884124 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:22.884847 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:22.987703 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:23.251177 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:23.384319 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:23.385451 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:23.488736 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:23.750834 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:23.883978 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:23.885576 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:23.988485 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:24.344771 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:24.384142 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:24.385500 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:24.487990 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:24.751771 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:24.884120 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:24.888120 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:24.988464 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:25.250655 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:25.385710 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:25.385728 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:25.488203 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:25.750724 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:25.884794 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:25.884830 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:25.989863 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:26.250951 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:26.391962 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:26.392241 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:26.490080 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:26.751112 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:26.884901 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:26.886626 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:26.987698 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:27.252655 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:27.384836 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:27.384878 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:27.488299 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:27.750404 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:27.884967 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:27.885230 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:27.988139 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:28.250987 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:28.385304 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:28.385508 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:28.488256 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:28.750056 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:28.884117 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:28.886052 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:28.987910 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:29.696346 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:29.696663 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:29.696750 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:29.696862 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:29.750556 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:29.884963 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:29.885213 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:29.988337 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:30.250907 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:30.383942 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 10:27:30.384537 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:30.488912 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:30.750687 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:30.883435 13120 kapi.go:107] duration metric: took 38.502765074s to wait for kubernetes.io/minikube-addons=registry ...
I0317 10:27:30.884786 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:30.987745 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:31.253359 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:31.385257 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:31.489095 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:31.751816 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:31.885072 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:31.988082 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:32.249744 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:32.384552 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:32.488534 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:32.750404 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:32.885485 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:32.988690 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:33.251183 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:33.386268 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:33.488162 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:33.750018 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:34.235037 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:34.235668 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:34.250311 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:34.385027 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:34.488584 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:34.750836 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:34.885846 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:34.988107 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:35.251258 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:35.385275 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:35.492512 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:35.750131 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:35.885809 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:35.988480 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:36.250206 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:36.385317 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:36.488340 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:36.750290 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:36.885601 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:36.988464 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:37.250834 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:37.384643 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:37.488226 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:37.749652 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:37.885424 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:37.988069 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:38.256602 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:38.385402 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:38.488491 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:38.750753 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:38.884567 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:38.988536 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:39.475222 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:39.475376 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:39.488064 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:39.749743 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:39.885579 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:39.988860 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:40.253960 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:40.384596 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:40.488253 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:40.750324 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:40.925853 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:40.988307 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:41.249838 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:41.384658 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:41.757166 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:41.758111 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:41.886544 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:41.991798 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:42.250683 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:42.386300 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:42.488251 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:42.751541 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:42.891141 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:42.992966 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:43.250891 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:43.384704 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:43.489627 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:43.750602 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:43.887408 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:43.990444 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:44.250102 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:44.384605 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:44.488041 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:44.751852 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:44.886904 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:44.987944 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:45.250351 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:45.385640 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:45.488286 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:45.751297 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:45.888003 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:45.988297 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:46.252500 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:46.386615 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:46.488949 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:46.751383 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:46.888343 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:46.988386 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:47.249909 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:47.384820 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:47.487679 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:47.750831 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:47.885122 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:47.988479 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:48.250226 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:48.385558 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:48.489113 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:48.751666 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:48.884742 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:48.998615 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:49.250696 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:49.385132 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:49.490144 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:49.751142 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:50.137818 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:50.137953 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:50.250775 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:50.385571 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:50.489482 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:50.750885 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:50.884959 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:50.988123 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:51.264627 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:51.385953 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:51.487900 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:51.751755 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:51.894118 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:51.994494 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:52.253483 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:52.385559 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:52.488414 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:52.750824 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:52.885034 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:52.988405 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:53.250958 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:53.386021 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:53.495154 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:53.754231 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:53.886941 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:53.988265 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:54.250520 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:54.385141 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:54.488234 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:54.752429 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:54.886102 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:54.988449 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:55.250826 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:55.385370 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:55.498123 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:55.749891 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:55.884901 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:55.987843 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:56.578017 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:56.578128 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:56.578261 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:56.750150 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:56.887683 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:56.989057 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:57.249887 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:57.387076 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:57.488155 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:57.755129 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:57.886481 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:57.988588 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:58.250265 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:58.384952 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:58.487964 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:58.751180 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:58.891106 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:58.989195 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:59.250912 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:59.385369 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:59.488085 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:27:59.750195 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:27:59.889178 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:27:59.990289 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:00.251352 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:00.385815 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:00.494615 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:00.750834 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:00.890944 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:00.988034 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:01.250779 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:01.384481 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:01.488302 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:01.891885 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:02.116213 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:02.116351 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:02.252872 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:02.384812 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:02.489607 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:02.750508 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:02.885495 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:02.988591 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:03.250165 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:03.385311 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:03.488495 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:03.750395 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:03.885322 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:03.988624 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:04.250888 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:04.385596 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:04.488041 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:04.750531 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:04.888388 13120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 10:28:04.994184 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:05.260078 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:05.389076 13120 kapi.go:107] duration metric: took 1m13.00737091s to wait for app.kubernetes.io/name=ingress-nginx ...
I0317 10:28:05.488051 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:05.749917 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:05.988233 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:06.249781 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:06.488523 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:06.751501 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:06.993679 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:07.253115 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:07.488984 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:07.751095 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:07.988330 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:08.250383 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:08.488158 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:08.749966 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:08.988204 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:09.249682 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:09.488428 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:09.751358 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:09.988827 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:10.253108 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 10:28:10.488525 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:10.750591 13120 kapi.go:107] duration metric: took 1m15.50341727s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0317 10:28:10.752305 13120 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-415393 cluster.
I0317 10:28:10.753639 13120 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0317 10:28:10.754920 13120 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0317 10:28:10.989426 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:11.489667 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:11.988950 13120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 10:28:12.488704 13120 kapi.go:107] duration metric: took 1m18.503997742s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0317 10:28:12.490495 13120 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, amd-gpu-device-plugin, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I0317 10:28:12.491705 13120 addons.go:514] duration metric: took 1m28.246892707s for enable addons: enabled=[storage-provisioner nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget amd-gpu-device-plugin ingress-dns yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I0317 10:28:12.491746 13120 start.go:246] waiting for cluster config update ...
I0317 10:28:12.491762 13120 start.go:255] writing updated cluster config ...
I0317 10:28:12.491995 13120 ssh_runner.go:195] Run: rm -f paused
I0317 10:28:12.543857 13120 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0317 10:28:12.545653 13120 out.go:177] * Done! kubectl is now configured to use "addons-415393" cluster and "default" namespace by default
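The gcp-auth messages above mention opting a pod out of credential mounting via a `gcp-auth-skip-secret` label. A minimal sketch of what that looks like in a pod manifest — the label key comes from the log output; the pod name, image, and label value are hypothetical examples:

```yaml
# Hypothetical pod manifest illustrating the gcp-auth opt-out hint above.
# The gcp-auth-skip-secret label (the key is what matters; any value works)
# tells the gcp-auth admission webhook not to mount GCP credentials
# into this pod.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds          # example name, not from the log
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx              # example image
```

Note that the label must be present at pod creation time, since the webhook mutates pods as they are admitted; per the log's own hint, pods created before the addon was enabled need to be recreated (or the addon re-enabled with `--refresh`) to pick up the credentials at all.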
==> CRI-O <==
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.172599772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470172573952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b163808-05ad-431d-9f24-0d9c78b79dba name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.173160092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc73e2ac-e7dd-4537-95bd-d8335a64b010 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.173225921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc73e2ac-e7dd-4537-95bd-d8335a64b010 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.173570274Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bae4bb09b3c89072bcf1863d07458568db4ddef2f1b0a33e342ca67aaa49a70,PodSandboxId:d350a676fbe4111403169a7a227fcc2d6ba8ad71ef9bcc33e3ea0178f71cea09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742207331061650517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61de065f-ddd0-4b74-9082-0b8df43235d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123990ec40f013be142bd846b82c46c1cc4172e5ff28782cbc3d66d228edb75e,PodSandboxId:c3c4386a8cb5b0cb05e4b69eed86cb81a12df8c31110a55c8ad61c58d4c47c11,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742207297023645327,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a790cb-3581-4f57-b8f4-7ee058bbaa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7335e2d78f0304917d7844abffd76d73f0a3685a3ad8465e71cf7bb41568a98,PodSandboxId:af0fa594b4be6ff1f184e018d58acb4b797bae42ce89d457e7bcae42f107956f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742207284823725907,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9l4gk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1b85c14-1786-4285-813f-d595dd21ef1b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b2afbff30b7ebd8b7970b4585cf70d03bdc070bfac639b7865a8cc11e18bbde,PodSandboxId:683bbebe62912c6f8e388f1e988c87d1f3e1893976a48af582f3213d7df3d541,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268600242179,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2rdmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf743dff-77b5-4d49-88e4-415f84613004,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a05e639f873efa8e82c4a64b6a8edd312053c6e48c0abac7c9cf0c4e204a536,PodSandboxId:639217ea03efa823311ad9b35cf8c69dbe1ac71239dd734fcb6926087cee0002,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268455949464,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m7dfw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75459ac2-1e92-4a13-8bb9-2f9d5a551f44,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e24b5b7d38f300a5b7ad1a64d4da19e137838282a851b0a007ace1f7ea8a67,PodSandboxId:57ff7efabff1b83711be999246c71f4d663531b90c8c9c24d358a268360e4315,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1742207264553183512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rvbxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eedeec77-e940-4eec-9145-834f44745ef5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49382d81bb2dc4c22c720772fabdd8618e89cd8955f0e53517a46b1abe2cb7a,PodSandboxId:72ac138c24ac7ce03ae760172f293fe8523f224d7ed45759b65ded837b8f3419,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf227
4e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742207236146287844,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wwrf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8bbaa4-cc81-4d9b-8a32-241d468adc22,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1118983a8c0a63120c55182a92d49ea5ccb5b4424874c140f3cae1da81210b2f,PodSandboxId:aff730212f474b63ad9ffe8e0ee5ddc8da267f6d85bb2dbab529b2165d5c71c9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-min
ikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742207233384956199,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9ed244614c2090e3d50a8f5c3f754560f80216cb988d4cd912a7e347903f0d,PodSandboxId:9f67d728d74f2a3cb3a42625e40b84d
4a5fcba22606f4f5884053ba6314f616a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742207210383354933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55183f-b056-4716-8c4a-ec30a50fc604,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127c651d50a3530a0af291e829887863b471b99bd24db2f27ebbd7f18a5337af,PodSandboxId:7e015dc62804b0a3d4e049ba8acc81fcd4c98835399
7cc0af490a8278d4ddbe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742207208345590259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z6pcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5987f43-3744-41ad-952e-d2dcdb1cb8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375af32f43777df35c5b162c2233c32c4684907de1ed4b0d67dcab46ea3ccf96,PodSandboxId:3dac110f6eca906e41e5c5ed4713ebd4ed807ee6ee4854ac69b0c83102d45877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742207205162238967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s56k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf85691-4d8b-4d73-ba24-40607a1b54fd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:b3a1d2c0c105892a3419add8a0ed666ef01691cebd89dba8d490d798cacea361,PodSandboxId:111d6c06751d8fda0ad39567373d5d615062d69d764927e3099f91f40f1ccf96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742207194536497090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874d910f90f236bc89cf67a06a77e29a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:5fef9d758082723f5cbf9ff7c90cf93efab2f05f48f4c4847d94bb332f987039,PodSandboxId:3277f8e4586df9f4ea60493a432c9ef4d848f8ebe2964817b1adce8320238616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742207194551492758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256627e1bd6898bc9cd4dde0328a9c0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationG
racePeriod: 30,},},&Container{Id:05361b48ca463a59bd79e6e5b46a53f2d471057b93713bec1ff4b30a1c62a31c,PodSandboxId:2c4f713895503d2eb9e572454d4129a8ca218cb6c3f5c57fe9910759af5ad297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742207194533068968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53a615c0a5ba956c1e2b58fc9c3eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:f968b7a1ffab7e9d952127555d249a79c4fc92cc68b1387304d60725412d7fa9,PodSandboxId:48c4343f09c19e3f0718f28523a4fb37a8505971c16c7686b4f93c12554df31f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742207194499062736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a442f110c04a73bcf49ee57a35ff2b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=cc73e2ac-e7dd-4537-95bd-d8335a64b010 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.209352395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c4ea91f8-8ec3-4d88-b854-f6f4f88d07d4 name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.209488310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c4ea91f8-8ec3-4d88-b854-f6f4f88d07d4 name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.216907551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec3adf04-d833-4fd1-ab3a-9ba5fda43dea name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.218237083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470218209894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec3adf04-d833-4fd1-ab3a-9ba5fda43dea name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.218968968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1384f4ba-f5ac-4ecc-a642-e77cdafa92b0 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.219046287Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1384f4ba-f5ac-4ecc-a642-e77cdafa92b0 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.219439677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bae4bb09b3c89072bcf1863d07458568db4ddef2f1b0a33e342ca67aaa49a70,PodSandboxId:d350a676fbe4111403169a7a227fcc2d6ba8ad71ef9bcc33e3ea0178f71cea09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742207331061650517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61de065f-ddd0-4b74-9082-0b8df43235d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123990ec40f013be142bd846b82c46c1cc4172e5ff28782cbc3d66d228edb75e,PodSandboxId:c3c4386a8cb5b0cb05e4b69eed86cb81a12df8c31110a55c8ad61c58d4c47c11,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742207297023645327,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a790cb-3581-4f57-b8f4-7ee058bbaa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7335e2d78f0304917d7844abffd76d73f0a3685a3ad8465e71cf7bb41568a98,PodSandboxId:af0fa594b4be6ff1f184e018d58acb4b797bae42ce89d457e7bcae42f107956f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742207284823725907,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9l4gk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1b85c14-1786-4285-813f-d595dd21ef1b,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b2afbff30b7ebd8b7970b4585cf70d03bdc070bfac639b7865a8cc11e18bbde,PodSandboxId:683bbebe62912c6f8e388f1e988c87d1f3e1893976a48af582f3213d7df3d541,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268600242179,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2rdmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf743dff-77b5-4d49-88e4-415f84613004,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a05e639f873efa8e82c4a64b6a8edd312053c6e48c0abac7c9cf0c4e204a536,PodSandboxId:639217ea03efa823311ad9b35cf8c69dbe1ac71239dd734fcb6926087cee0002,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268455949464,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m7dfw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75459ac2-1e92-4a13-8bb9-2f9d5a551f44,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e24b5b7d38f300a5b7ad1a64d4da19e137838282a851b0a007ace1f7ea8a67,PodSandboxId:57ff7efabff1b83711be999246c71f4d663531b90c8c9c24d358a268360e4315,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1742207264553183512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rvbxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eedeec77-e940-4eec-9145-834f44745ef5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49382d81bb2dc4c22c720772fabdd8618e89cd8955f0e53517a46b1abe2cb7a,PodSandboxId:72ac138c24ac7ce03ae760172f293fe8523f224d7ed45759b65ded837b8f3419,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742207236146287844,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wwrf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8bbaa4-cc81-4d9b-8a32-241d468adc22,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1118983a8c0a63120c55182a92d49ea5ccb5b4424874c140f3cae1da81210b2f,PodSandboxId:aff730212f474b63ad9ffe8e0ee5ddc8da267f6d85bb2dbab529b2165d5c71c9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742207233384956199,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9ed244614c2090e3d50a8f5c3f754560f80216cb988d4cd912a7e347903f0d,PodSandboxId:9f67d728d74f2a3cb3a42625e40b84d4a5fcba22606f4f5884053ba6314f616a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742207210383354933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55183f-b056-4716-8c4a-ec30a50fc604,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127c651d50a3530a0af291e829887863b471b99bd24db2f27ebbd7f18a5337af,PodSandboxId:7e015dc62804b0a3d4e049ba8acc81fcd4c988353997cc0af490a8278d4ddbe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742207208345590259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z6pcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5987f43-3744-41ad-952e-d2dcdb1cb8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375af32f43777df35c5b162c2233c32c4684907de1ed4b0d67dcab46ea3ccf96,PodSandboxId:3dac110f6eca906e41e5c5ed4713ebd4ed807ee6ee4854ac69b0c83102d45877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742207205162238967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s56k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf85691-4d8b-4d73-ba24-40607a1b54fd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a1d2c0c105892a3419add8a0ed666ef01691cebd89dba8d490d798cacea361,PodSandboxId:111d6c06751d8fda0ad39567373d5d615062d69d764927e3099f91f40f1ccf96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742207194536497090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874d910f90f236bc89cf67a06a77e29a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fef9d758082723f5cbf9ff7c90cf93efab2f05f48f4c4847d94bb332f987039,PodSandboxId:3277f8e4586df9f4ea60493a432c9ef4d848f8ebe2964817b1adce8320238616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742207194551492758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256627e1bd6898bc9cd4dde0328a9c0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05361b48ca463a59bd79e6e5b46a53f2d471057b93713bec1ff4b30a1c62a31c,PodSandboxId:2c4f713895503d2eb9e572454d4129a8ca218cb6c3f5c57fe9910759af5ad297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742207194533068968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53a615c0a5ba956c1e2b58fc9c3eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f968b7a1ffab7e9d952127555d249a79c4fc92cc68b1387304d60725412d7fa9,PodSandboxId:48c4343f09c19e3f0718f28523a4fb37a8505971c16c7686b4f93c12554df31f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742207194499062736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a442f110c04a73bcf49ee57a35ff2b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1384f4ba-f5ac-4ecc-a642-e77cdafa92b0 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.253643943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d0ce092-a760-41df-b2c1-2a6065503186 name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.253732325Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d0ce092-a760-41df-b2c1-2a6065503186 name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.255126162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9ea3d94-7944-47d0-b3fd-cb639608d42c name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.256290134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470256265324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9ea3d94-7944-47d0-b3fd-cb639608d42c name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.256839728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bfb708f-8715-469f-92b7-3f08bd3b9dbe name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.256901644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bfb708f-8715-469f-92b7-3f08bd3b9dbe name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.257197283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bae4bb09b3c89072bcf1863d07458568db4ddef2f1b0a33e342ca67aaa49a70,PodSandboxId:d350a676fbe4111403169a7a227fcc2d6ba8ad71ef9bcc33e3ea0178f71cea09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742207331061650517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61de065f-ddd0-4b74-9082-0b8df43235d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123990ec40f013be142bd846b82c46c1cc4172e5ff28782cbc3d66d228edb75e,PodSandboxId:c3c4386a8cb5b0cb05e4b69eed86cb81a12df8c31110a55c8ad61c58d4c47c11,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742207297023645327,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a790cb-3581-4f57-b8f4-7ee058bbaa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7335e2d78f0304917d7844abffd76d73f0a3685a3ad8465e71cf7bb41568a98,PodSandboxId:af0fa594b4be6ff1f184e018d58acb4b797bae42ce89d457e7bcae42f107956f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742207284823725907,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9l4gk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1b85c14-1786-4285-813f-d595dd21ef1b,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b2afbff30b7ebd8b7970b4585cf70d03bdc070bfac639b7865a8cc11e18bbde,PodSandboxId:683bbebe62912c6f8e388f1e988c87d1f3e1893976a48af582f3213d7df3d541,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268600242179,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2rdmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf743dff-77b5-4d49-88e4-415f84613004,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a05e639f873efa8e82c4a64b6a8edd312053c6e48c0abac7c9cf0c4e204a536,PodSandboxId:639217ea03efa823311ad9b35cf8c69dbe1ac71239dd734fcb6926087cee0002,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268455949464,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m7dfw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75459ac2-1e92-4a13-8bb9-2f9d5a551f44,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e24b5b7d38f300a5b7ad1a64d4da19e137838282a851b0a007ace1f7ea8a67,PodSandboxId:57ff7efabff1b83711be999246c71f4d663531b90c8c9c24d358a268360e4315,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1742207264553183512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rvbxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eedeec77-e940-4eec-9145-834f44745ef5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49382d81bb2dc4c22c720772fabdd8618e89cd8955f0e53517a46b1abe2cb7a,PodSandboxId:72ac138c24ac7ce03ae760172f293fe8523f224d7ed45759b65ded837b8f3419,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742207236146287844,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wwrf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8bbaa4-cc81-4d9b-8a32-241d468adc22,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1118983a8c0a63120c55182a92d49ea5ccb5b4424874c140f3cae1da81210b2f,PodSandboxId:aff730212f474b63ad9ffe8e0ee5ddc8da267f6d85bb2dbab529b2165d5c71c9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742207233384956199,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9ed244614c2090e3d50a8f5c3f754560f80216cb988d4cd912a7e347903f0d,PodSandboxId:9f67d728d74f2a3cb3a42625e40b84d4a5fcba22606f4f5884053ba6314f616a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742207210383354933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55183f-b056-4716-8c4a-ec30a50fc604,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127c651d50a3530a0af291e829887863b471b99bd24db2f27ebbd7f18a5337af,PodSandboxId:7e015dc62804b0a3d4e049ba8acc81fcd4c988353997cc0af490a8278d4ddbe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742207208345590259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z6pcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5987f43-3744-41ad-952e-d2dcdb1cb8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375af32f43777df35c5b162c2233c32c4684907de1ed4b0d67dcab46ea3ccf96,PodSandboxId:3dac110f6eca906e41e5c5ed4713ebd4ed807ee6ee4854ac69b0c83102d45877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742207205162238967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s56k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf85691-4d8b-4d73-ba24-40607a1b54fd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a1d2c0c105892a3419add8a0ed666ef01691cebd89dba8d490d798cacea361,PodSandboxId:111d6c06751d8fda0ad39567373d5d615062d69d764927e3099f91f40f1ccf96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742207194536497090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874d910f90f236bc89cf67a06a77e29a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fef9d758082723f5cbf9ff7c90cf93efab2f05f48f4c4847d94bb332f987039,PodSandboxId:3277f8e4586df9f4ea60493a432c9ef4d848f8ebe2964817b1adce8320238616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742207194551492758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256627e1bd6898bc9cd4dde0328a9c0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationG
racePeriod: 30,},},&Container{Id:05361b48ca463a59bd79e6e5b46a53f2d471057b93713bec1ff4b30a1c62a31c,PodSandboxId:2c4f713895503d2eb9e572454d4129a8ca218cb6c3f5c57fe9910759af5ad297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742207194533068968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53a615c0a5ba956c1e2b58fc9c3eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:f968b7a1ffab7e9d952127555d249a79c4fc92cc68b1387304d60725412d7fa9,PodSandboxId:48c4343f09c19e3f0718f28523a4fb37a8505971c16c7686b4f93c12554df31f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742207194499062736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a442f110c04a73bcf49ee57a35ff2b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=9bfb708f-8715-469f-92b7-3f08bd3b9dbe name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.293819282Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c86fb8ac-8ab0-4835-b3be-ceb702ecae4a name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.293909371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c86fb8ac-8ab0-4835-b3be-ceb702ecae4a name=/runtime.v1.RuntimeService/Version
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.294955353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=425c9262-f563-40c4-adfb-ebbfc5b5abc5 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.296125115Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470296100108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=425c9262-f563-40c4-adfb-ebbfc5b5abc5 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.296733476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bbbca44-511d-4b1d-82fd-7eb130915ca8 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.296785187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bbbca44-511d-4b1d-82fd-7eb130915ca8 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 10:31:10 addons-415393 crio[663]: time="2025-03-17 10:31:10.297085563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bae4bb09b3c89072bcf1863d07458568db4ddef2f1b0a33e342ca67aaa49a70,PodSandboxId:d350a676fbe4111403169a7a227fcc2d6ba8ad71ef9bcc33e3ea0178f71cea09,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742207331061650517,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61de065f-ddd0-4b74-9082-0b8df43235d4,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123990ec40f013be142bd846b82c46c1cc4172e5ff28782cbc3d66d228edb75e,PodSandboxId:c3c4386a8cb5b0cb05e4b69eed86cb81a12df8c31110a55c8ad61c58d4c47c11,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742207297023645327,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b9a790cb-3581-4f57-b8f4-7ee058bbaa8e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7335e2d78f0304917d7844abffd76d73f0a3685a3ad8465e71cf7bb41568a98,PodSandboxId:af0fa594b4be6ff1f184e018d58acb4b797bae42ce89d457e7bcae42f107956f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742207284823725907,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-9l4gk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f1b85c14-1786-4285-813f-d595dd21ef1b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0b2afbff30b7ebd8b7970b4585cf70d03bdc070bfac639b7865a8cc11e18bbde,PodSandboxId:683bbebe62912c6f8e388f1e988c87d1f3e1893976a48af582f3213d7df3d541,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268600242179,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2rdmh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bf743dff-77b5-4d49-88e4-415f84613004,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a05e639f873efa8e82c4a64b6a8edd312053c6e48c0abac7c9cf0c4e204a536,PodSandboxId:639217ea03efa823311ad9b35cf8c69dbe1ac71239dd734fcb6926087cee0002,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742207268455949464,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-m7dfw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 75459ac2-1e92-4a13-8bb9-2f9d5a551f44,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97e24b5b7d38f300a5b7ad1a64d4da19e137838282a851b0a007ace1f7ea8a67,PodSandboxId:57ff7efabff1b83711be999246c71f4d663531b90c8c9c24d358a268360e4315,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1742207264553183512,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-rvbxv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: eedeec77-e940-4eec-9145-834f44745ef5,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49382d81bb2dc4c22c720772fabdd8618e89cd8955f0e53517a46b1abe2cb7a,PodSandboxId:72ac138c24ac7ce03ae760172f293fe8523f224d7ed45759b65ded837b8f3419,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf227
4e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742207236146287844,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wwrf9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d8bbaa4-cc81-4d9b-8a32-241d468adc22,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1118983a8c0a63120c55182a92d49ea5ccb5b4424874c140f3cae1da81210b2f,PodSandboxId:aff730212f474b63ad9ffe8e0ee5ddc8da267f6d85bb2dbab529b2165d5c71c9,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-min
ikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742207233384956199,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdcc8a22-ed21-4e3a-a7be-cd1bf7035f08,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e9ed244614c2090e3d50a8f5c3f754560f80216cb988d4cd912a7e347903f0d,PodSandboxId:9f67d728d74f2a3cb3a42625e40b84d
4a5fcba22606f4f5884053ba6314f616a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742207210383354933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d55183f-b056-4716-8c4a-ec30a50fc604,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:127c651d50a3530a0af291e829887863b471b99bd24db2f27ebbd7f18a5337af,PodSandboxId:7e015dc62804b0a3d4e049ba8acc81fcd4c98835399
7cc0af490a8278d4ddbe0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742207208345590259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z6pcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5987f43-3744-41ad-952e-d2dcdb1cb8fe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePo
licy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375af32f43777df35c5b162c2233c32c4684907de1ed4b0d67dcab46ea3ccf96,PodSandboxId:3dac110f6eca906e41e5c5ed4713ebd4ed807ee6ee4854ac69b0c83102d45877,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742207205162238967,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s56k7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cf85691-4d8b-4d73-ba24-40607a1b54fd,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termi
nationGracePeriod: 30,},},&Container{Id:b3a1d2c0c105892a3419add8a0ed666ef01691cebd89dba8d490d798cacea361,PodSandboxId:111d6c06751d8fda0ad39567373d5d615062d69d764927e3099f91f40f1ccf96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742207194536497090,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 874d910f90f236bc89cf67a06a77e29a,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes
.pod.terminationGracePeriod: 30,},},&Container{Id:5fef9d758082723f5cbf9ff7c90cf93efab2f05f48f4c4847d94bb332f987039,PodSandboxId:3277f8e4586df9f4ea60493a432c9ef4d848f8ebe2964817b1adce8320238616,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742207194551492758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 256627e1bd6898bc9cd4dde0328a9c0b,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationG
racePeriod: 30,},},&Container{Id:05361b48ca463a59bd79e6e5b46a53f2d471057b93713bec1ff4b30a1c62a31c,PodSandboxId:2c4f713895503d2eb9e572454d4129a8ca218cb6c3f5c57fe9910759af5ad297,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742207194533068968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b53a615c0a5ba956c1e2b58fc9c3eccd,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:f968b7a1ffab7e9d952127555d249a79c4fc92cc68b1387304d60725412d7fa9,PodSandboxId:48c4343f09c19e3f0718f28523a4fb37a8505971c16c7686b4f93c12554df31f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742207194499062736,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-415393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88a442f110c04a73bcf49ee57a35ff2b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74"
id=4bbbca44-511d-4b1d-82fd-7eb130915ca8 name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
6bae4bb09b3c8 docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591 2 minutes ago Running nginx 0 d350a676fbe41 nginx
123990ec40f01 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 c3c4386a8cb5b busybox
c7335e2d78f03 registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 af0fa594b4be6 ingress-nginx-controller-56d7c84fd4-9l4gk
0b2afbff30b7e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited patch 0 683bbebe62912 ingress-nginx-admission-patch-2rdmh
9a05e639f873e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 639217ea03efa ingress-nginx-admission-create-m7dfw
97e24b5b7d38f docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 57ff7efabff1b local-path-provisioner-76f89f99b5-rvbxv
f49382d81bb2d docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 3 minutes ago Running amd-gpu-device-plugin 0 72ac138c24ac7 amd-gpu-device-plugin-wwrf9
1118983a8c0a6 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 3 minutes ago Running minikube-ingress-dns 0 aff730212f474 kube-ingress-dns-minikube
7e9ed244614c2 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 9f67d728d74f2 storage-provisioner
127c651d50a35 c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 7e015dc62804b coredns-668d6bf9bc-z6pcb
375af32f43777 f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5 4 minutes ago Running kube-proxy 0 3dac110f6eca9 kube-proxy-s56k7
5fef9d7580827 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef 4 minutes ago Running kube-apiserver 0 3277f8e4586df kube-apiserver-addons-415393
b3a1d2c0c1058 b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389 4 minutes ago Running kube-controller-manager 0 111d6c06751d8 kube-controller-manager-addons-415393
05361b48ca463 d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d 4 minutes ago Running kube-scheduler 0 2c4f713895503 kube-scheduler-addons-415393
f968b7a1ffab7 a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc 4 minutes ago Running etcd 0 48c4343f09c19 etcd-addons-415393
==> coredns [127c651d50a3530a0af291e829887863b471b99bd24db2f27ebbd7f18a5337af] <==
[INFO] 10.244.0.8:47073 - 60956 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000188943s
[INFO] 10.244.0.8:47073 - 15768 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000131476s
[INFO] 10.244.0.8:47073 - 24374 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000209497s
[INFO] 10.244.0.8:47073 - 37565 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000159447s
[INFO] 10.244.0.8:47073 - 64520 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000088574s
[INFO] 10.244.0.8:47073 - 45123 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000186962s
[INFO] 10.244.0.8:47073 - 21403 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000085469s
[INFO] 10.244.0.8:35960 - 12885 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133593s
[INFO] 10.244.0.8:35960 - 13147 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000063996s
[INFO] 10.244.0.8:59156 - 58454 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077036s
[INFO] 10.244.0.8:59156 - 58699 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00005555s
[INFO] 10.244.0.8:46026 - 59526 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000067878s
[INFO] 10.244.0.8:46026 - 59734 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111727s
[INFO] 10.244.0.8:54048 - 60316 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000083401s
[INFO] 10.244.0.8:54048 - 60081 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117477s
[INFO] 10.244.0.23:47509 - 62379 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000494978s
[INFO] 10.244.0.23:56648 - 47964 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164873s
[INFO] 10.244.0.23:59505 - 19843 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000211265s
[INFO] 10.244.0.23:51330 - 10420 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102075s
[INFO] 10.244.0.23:46388 - 22879 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064096s
[INFO] 10.244.0.23:57771 - 12632 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000076859s
[INFO] 10.244.0.23:38974 - 3226 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001571821s
[INFO] 10.244.0.23:50229 - 40976 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000710955s
[INFO] 10.244.0.26:36602 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000422604s
[INFO] 10.244.0.26:35726 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000202065s
==> describe nodes <==
Name: addons-415393
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-415393
kubernetes.io/os=linux
minikube.k8s.io/commit=28b3ce799b018a38b7c40f89b465976263272e76
minikube.k8s.io/name=addons-415393
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_17T10_26_40_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-415393
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Mar 2025 10:26:36 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-415393
AcquireTime: <unset>
RenewTime: Mon, 17 Mar 2025 10:31:07 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Mar 2025 10:29:13 +0000 Mon, 17 Mar 2025 10:26:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Mar 2025 10:29:13 +0000 Mon, 17 Mar 2025 10:26:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Mar 2025 10:29:13 +0000 Mon, 17 Mar 2025 10:26:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Mar 2025 10:29:13 +0000 Mon, 17 Mar 2025 10:26:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.132
Hostname: addons-415393
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: a13acad8454041b5b5b0189041dee06c
System UUID: a13acad8-4540-41b5-b5b0-189041dee06c
Boot ID: f68c1a69-dd05-4810-a436-6cc004b5dec8
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods:          (14 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
  default                     hello-world-app-7d9564db4-dpnkv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
  ingress-nginx               ingress-nginx-controller-56d7c84fd4-9l4gk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m18s
  kube-system                 amd-gpu-device-plugin-wwrf9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
  kube-system                 coredns-668d6bf9bc-z6pcb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
  kube-system                 etcd-addons-415393                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m31s
  kube-system                 kube-apiserver-addons-415393                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
  kube-system                 kube-controller-manager-addons-415393        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
  kube-system                 kube-proxy-s56k7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
  kube-system                 kube-scheduler-addons-415393                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
  local-path-storage          local-path-provisioner-76f89f99b5-rvbxv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (42%)  0 (0%)
  memory             260Mi (6%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From             Message
  ----    ------                   ----                   ----             -------
  Normal  Starting                 4m24s                  kube-proxy
  Normal  Starting                 4m37s                  kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node addons-415393 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m37s (x8 over 4m37s)  kubelet          Node addons-415393 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node addons-415393 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 4m31s                  kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  4m31s                  kubelet          Node addons-415393 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m31s                  kubelet          Node addons-415393 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m31s                  kubelet          Node addons-415393 status is now: NodeHasSufficientPID
  Normal  NodeReady                4m30s                  kubelet          Node addons-415393 status is now: NodeReady
  Normal  RegisteredNode           4m27s                  node-controller  Node addons-415393 event: Registered Node addons-415393 in Controller
==> dmesg <==
[ +0.076048] kauditd_printk_skb: 69 callbacks suppressed
[ +4.773188] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
[ +0.394012] kauditd_printk_skb: 46 callbacks suppressed
[ +5.066973] kauditd_printk_skb: 107 callbacks suppressed
[ +5.050700] kauditd_printk_skb: 162 callbacks suppressed
[Mar17 10:27] kauditd_printk_skb: 43 callbacks suppressed
[ +29.322448] kauditd_printk_skb: 2 callbacks suppressed
[ +5.127209] kauditd_printk_skb: 22 callbacks suppressed
[ +6.128047] kauditd_printk_skb: 11 callbacks suppressed
[ +5.752299] kauditd_printk_skb: 31 callbacks suppressed
[ +5.005500] kauditd_printk_skb: 46 callbacks suppressed
[Mar17 10:28] kauditd_printk_skb: 14 callbacks suppressed
[ +5.445119] kauditd_printk_skb: 9 callbacks suppressed
[ +6.759715] kauditd_printk_skb: 9 callbacks suppressed
[ +16.600050] kauditd_printk_skb: 2 callbacks suppressed
[ +6.290584] kauditd_printk_skb: 6 callbacks suppressed
[ +5.488753] kauditd_printk_skb: 34 callbacks suppressed
[ +5.639987] kauditd_printk_skb: 48 callbacks suppressed
[ +5.001329] kauditd_printk_skb: 7 callbacks suppressed
[Mar17 10:29] kauditd_printk_skb: 39 callbacks suppressed
[ +5.618351] kauditd_printk_skb: 9 callbacks suppressed
[ +5.759150] kauditd_printk_skb: 32 callbacks suppressed
[ +11.855047] kauditd_printk_skb: 6 callbacks suppressed
[ +6.889256] kauditd_printk_skb: 7 callbacks suppressed
[Mar17 10:31] kauditd_printk_skb: 49 callbacks suppressed
==> etcd [f968b7a1ffab7e9d952127555d249a79c4fc92cc68b1387304d60725412d7fa9] <==
{"level":"warn","ts":"2025-03-17T10:28:02.101956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.506152ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T10:28:02.102137Z","caller":"traceutil/trace.go:171","msg":"trace[664608469] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1103; }","duration":"227.767879ms","start":"2025-03-17T10:28:01.874357Z","end":"2025-03-17T10:28:02.102124Z","steps":["trace[664608469] 'agreement among raft nodes before linearized reading' (duration: 224.13215ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:28:39.771745Z","caller":"traceutil/trace.go:171","msg":"trace[633217578] linearizableReadLoop","detail":"{readStateIndex:1351; appliedIndex:1350; }","duration":"135.089992ms","start":"2025-03-17T10:28:39.636631Z","end":"2025-03-17T10:28:39.771721Z","steps":["trace[633217578] 'read index received' (duration: 134.929634ms)","trace[633217578] 'applied index is now lower than readState.Index' (duration: 159.942µs)"],"step_count":2}
{"level":"info","ts":"2025-03-17T10:28:39.772077Z","caller":"traceutil/trace.go:171","msg":"trace[1017800462] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"234.809495ms","start":"2025-03-17T10:28:39.537255Z","end":"2025-03-17T10:28:39.772065Z","steps":["trace[1017800462] 'process raft request' (duration: 234.348806ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T10:28:39.772294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.661569ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" limit:1 ","response":"range_response_count:1 size:2270"}
{"level":"info","ts":"2025-03-17T10:28:39.772326Z","caller":"traceutil/trace.go:171","msg":"trace[1243877288] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io; range_end:; response_count:1; response_revision:1306; }","duration":"135.721439ms","start":"2025-03-17T10:28:39.636591Z","end":"2025-03-17T10:28:39.772312Z","steps":["trace[1243877288] 'agreement among raft nodes before linearized reading' (duration: 135.614521ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:28:40.179887Z","caller":"traceutil/trace.go:171","msg":"trace[1112948213] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1338; }","duration":"140.93812ms","start":"2025-03-17T10:28:40.038932Z","end":"2025-03-17T10:28:40.179870Z","steps":["trace[1112948213] 'process raft request' (duration: 48.89688ms)","trace[1112948213] 'compare' (duration: 91.625027ms)"],"step_count":2}
{"level":"info","ts":"2025-03-17T10:28:40.179977Z","caller":"traceutil/trace.go:171","msg":"trace[667514042] linearizableReadLoop","detail":"{readStateIndex:1386; appliedIndex:1383; }","duration":"136.197975ms","start":"2025-03-17T10:28:40.043729Z","end":"2025-03-17T10:28:40.179927Z","steps":["trace[667514042] 'read index received' (duration: 44.108885ms)","trace[667514042] 'applied index is now lower than readState.Index' (duration: 92.088562ms)"],"step_count":2}
{"level":"warn","ts":"2025-03-17T10:28:40.180208Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.381386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/metrics-server\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T10:28:40.180232Z","caller":"traceutil/trace.go:171","msg":"trace[950724600] range","detail":"{range_begin:/registry/services/endpoints/kube-system/metrics-server; range_end:; response_count:0; response_revision:1341; }","duration":"136.514615ms","start":"2025-03-17T10:28:40.043711Z","end":"2025-03-17T10:28:40.180225Z","steps":["trace[950724600] 'agreement among raft nodes before linearized reading' (duration: 136.302537ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:28:40.180259Z","caller":"traceutil/trace.go:171","msg":"trace[1795335357] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1339; }","duration":"138.550082ms","start":"2025-03-17T10:28:40.041701Z","end":"2025-03-17T10:28:40.180251Z","steps":["trace[1795335357] 'process raft request' (duration: 137.912185ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:28:40.180469Z","caller":"traceutil/trace.go:171","msg":"trace[134814834] transaction","detail":"{read_only:false; response_revision:1341; number_of_response:1; }","duration":"136.54027ms","start":"2025-03-17T10:28:40.043922Z","end":"2025-03-17T10:28:40.180462Z","steps":["trace[134814834] 'process raft request' (duration: 135.879127ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:28:40.180465Z","caller":"traceutil/trace.go:171","msg":"trace[1604488091] transaction","detail":"{read_only:false; response_revision:1340; number_of_response:1; }","duration":"138.709866ms","start":"2025-03-17T10:28:40.041749Z","end":"2025-03-17T10:28:40.180458Z","steps":["trace[1604488091] 'process raft request' (duration: 137.94844ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:29:03.369295Z","caller":"traceutil/trace.go:171","msg":"trace[1188535158] linearizableReadLoop","detail":"{readStateIndex:1596; appliedIndex:1595; }","duration":"141.310584ms","start":"2025-03-17T10:29:03.227971Z","end":"2025-03-17T10:29:03.369282Z","steps":["trace[1188535158] 'read index received' (duration: 141.184653ms)","trace[1188535158] 'applied index is now lower than readState.Index' (duration: 125.518µs)"],"step_count":2}
{"level":"warn","ts":"2025-03-17T10:29:03.369440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.456383ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T10:29:03.369462Z","caller":"traceutil/trace.go:171","msg":"trace[1000095819] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1538; }","duration":"141.490011ms","start":"2025-03-17T10:29:03.227965Z","end":"2025-03-17T10:29:03.369455Z","steps":["trace[1000095819] 'agreement among raft nodes before linearized reading' (duration: 141.378714ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:29:03.369572Z","caller":"traceutil/trace.go:171","msg":"trace[1718267926] transaction","detail":"{read_only:false; response_revision:1538; number_of_response:1; }","duration":"393.328123ms","start":"2025-03-17T10:29:02.976217Z","end":"2025-03-17T10:29:03.369545Z","steps":["trace[1718267926] 'process raft request' (duration: 392.97948ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T10:29:03.369687Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T10:29:02.976200Z","time spent":"393.414466ms","remote":"127.0.0.1:36630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1525 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"info","ts":"2025-03-17T10:29:03.794183Z","caller":"traceutil/trace.go:171","msg":"trace[1239377396] transaction","detail":"{read_only:false; response_revision:1539; number_of_response:1; }","duration":"116.400437ms","start":"2025-03-17T10:29:03.677763Z","end":"2025-03-17T10:29:03.794163Z","steps":["trace[1239377396] 'process raft request' (duration: 116.316681ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T10:29:34.930439Z","caller":"traceutil/trace.go:171","msg":"trace[437173425] linearizableReadLoop","detail":"{readStateIndex:1846; appliedIndex:1845; }","duration":"177.747794ms","start":"2025-03-17T10:29:34.752677Z","end":"2025-03-17T10:29:34.930425Z","steps":["trace[437173425] 'read index received' (duration: 177.44403ms)","trace[437173425] 'applied index is now lower than readState.Index' (duration: 303.247µs)"],"step_count":2}
{"level":"info","ts":"2025-03-17T10:29:34.930672Z","caller":"traceutil/trace.go:171","msg":"trace[1164195021] transaction","detail":"{read_only:false; response_revision:1778; number_of_response:1; }","duration":"292.761079ms","start":"2025-03-17T10:29:34.637900Z","end":"2025-03-17T10:29:34.930661Z","steps":["trace[1164195021] 'process raft request' (duration: 292.363668ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T10:29:34.930825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.134877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T10:29:34.930868Z","caller":"traceutil/trace.go:171","msg":"trace[2075777145] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-hostpathplugin-health-monitor-role; range_end:; response_count:0; response_revision:1778; }","duration":"178.208346ms","start":"2025-03-17T10:29:34.752652Z","end":"2025-03-17T10:29:34.930860Z","steps":["trace[2075777145] 'agreement among raft nodes before linearized reading' (duration: 178.140585ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T10:29:34.930982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.496526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T10:29:34.931012Z","caller":"traceutil/trace.go:171","msg":"trace[186494594] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1778; }","duration":"154.545894ms","start":"2025-03-17T10:29:34.776460Z","end":"2025-03-17T10:29:34.931005Z","steps":["trace[186494594] 'agreement among raft nodes before linearized reading' (duration: 154.502432ms)"],"step_count":1}
==> kernel <==
10:31:10 up 5 min, 0 users, load average: 0.88, 1.13, 0.56
Linux addons-415393 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [5fef9d758082723f5cbf9ff7c90cf93efab2f05f48f4c4847d94bb332f987039] <==
E0317 10:27:42.922750 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.175.135:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.175.135:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.175.135:443: connect: connection refused" logger="UnhandledError"
I0317 10:27:42.983902 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0317 10:28:24.293022 1 conn.go:339] Error on socket receive: read tcp 192.168.39.132:8443->192.168.39.1:45302: use of closed network connection
E0317 10:28:24.469634 1 conn.go:339] Error on socket receive: read tcp 192.168.39.132:8443->192.168.39.1:45326: use of closed network connection
I0317 10:28:33.650524 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.224.212"}
I0317 10:28:39.780100 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0317 10:28:40.987452 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0317 10:28:43.929647 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0317 10:28:45.264743 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0317 10:28:45.436730 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.209.249"}
I0317 10:29:12.704625 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0317 10:29:33.035523 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 10:29:33.035579 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 10:29:33.071358 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 10:29:33.071463 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 10:29:33.102742 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 10:29:33.102798 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 10:29:33.130350 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 10:29:33.130517 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 10:29:33.173052 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 10:29:33.173078 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0317 10:29:34.130623 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0317 10:29:34.177672 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0317 10:29:34.220697 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I0317 10:31:09.175520 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.79.201"}
==> kube-controller-manager [b3a1d2c0c105892a3419add8a0ed666ef01691cebd89dba8d490d798cacea361] <==
E0317 10:30:10.152327 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 10:30:11.945192 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 10:30:11.946405 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0317 10:30:11.947224 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 10:30:11.947295 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 10:30:41.188309 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 10:30:41.189239 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W0317 10:30:41.190165 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 10:30:41.190494 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 10:30:41.648624 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 10:30:41.649941 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0317 10:30:41.650811 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 10:30:41.650888 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 10:30:51.597028 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 10:30:51.597936 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
W0317 10:30:51.598809 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 10:30:51.598874 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 10:30:55.606614 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 10:30:55.608483 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0317 10:30:55.610680 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 10:30:55.610715 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0317 10:31:08.996224 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="50.941301ms"
I0317 10:31:09.010929 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="14.654773ms"
I0317 10:31:09.023988 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="13.006763ms"
I0317 10:31:09.024138 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="40.674µs"
==> kube-proxy [375af32f43777df35c5b162c2233c32c4684907de1ed4b0d67dcab46ea3ccf96] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0317 10:26:46.055906 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0317 10:26:46.074183 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.132"]
E0317 10:26:46.074296 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0317 10:26:46.155789 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0317 10:26:46.155818 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0317 10:26:46.155844 1 server_linux.go:170] "Using iptables Proxier"
I0317 10:26:46.160124 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0317 10:26:46.160348 1 server.go:497] "Version info" version="v1.32.2"
I0317 10:26:46.160360 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0317 10:26:46.161835 1 config.go:199] "Starting service config controller"
I0317 10:26:46.161857 1 shared_informer.go:313] Waiting for caches to sync for service config
I0317 10:26:46.161883 1 config.go:105] "Starting endpoint slice config controller"
I0317 10:26:46.161887 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0317 10:26:46.162552 1 config.go:329] "Starting node config controller"
I0317 10:26:46.162568 1 shared_informer.go:313] Waiting for caches to sync for node config
I0317 10:26:46.264485 1 shared_informer.go:320] Caches are synced for node config
I0317 10:26:46.264515 1 shared_informer.go:320] Caches are synced for service config
I0317 10:26:46.264523 1 shared_informer.go:320] Caches are synced for endpoint slice config
==> kube-scheduler [05361b48ca463a59bd79e6e5b46a53f2d471057b93713bec1ff4b30a1c62a31c] <==
W0317 10:26:37.762211 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0317 10:26:37.762300 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0317 10:26:37.765548 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0317 10:26:37.765595 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:26:37.895516 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0317 10:26:37.895561 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0317 10:26:37.913608 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0317 10:26:37.913704 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:26:37.915023 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0317 10:26:37.915090 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:26:37.980431 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0317 10:26:37.980477 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.026460 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0317 10:26:38.026504 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.175335 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0317 10:26:38.175448 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.210093 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0317 10:26:38.210229 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.216626 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0317 10:26:38.216675 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.236494 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0317 10:26:38.236540 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 10:26:38.272590 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0317 10:26:38.274016 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0317 10:26:40.311998 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 17 10:30:39 addons-415393 kubelet[1220]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Mar 17 10:30:39 addons-415393 kubelet[1220]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Mar 17 10:30:39 addons-415393 kubelet[1220]: Perhaps ip6tables or your kernel needs to be upgraded.
Mar 17 10:30:39 addons-415393 kubelet[1220]: > table="nat" chain="KUBE-KUBELET-CANARY"
Mar 17 10:30:40 addons-415393 kubelet[1220]: E0317 10:30:40.033887 1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207440033559459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:30:40 addons-415393 kubelet[1220]: E0317 10:30:40.033931 1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207440033559459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:30:50 addons-415393 kubelet[1220]: E0317 10:30:50.036202 1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207450035697311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:30:50 addons-415393 kubelet[1220]: E0317 10:30:50.036229 1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207450035697311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:31:00 addons-415393 kubelet[1220]: E0317 10:31:00.039766 1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207460039234660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:31:00 addons-415393 kubelet[1220]: E0317 10:31:00.040045 1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207460039234660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.998533 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="csi-external-health-monitor-controller"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.998936 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="cf5c147c-8add-429f-8e3d-22a1a3626fdb" containerName="csi-resizer"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.998993 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="2b441913-925d-4780-9896-3904e05ad034" containerName="volume-snapshot-controller"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999030 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="liveness-probe"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999065 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="42fde7bc-0fe4-4fb3-bcc3-401b8ac7b528" containerName="task-pv-container"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999098 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="node-driver-registrar"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999132 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="hostpath"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999175 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="0f0b28fa-c4c1-45ee-b1fc-87d55fe5c106" containerName="volume-snapshot-controller"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999263 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f54e0cc-d6b0-4439-ac5d-72a4ca1a0e3a" containerName="csi-attacher"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999299 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="csi-provisioner"
Mar 17 10:31:08 addons-415393 kubelet[1220]: I0317 10:31:08.999333 1220 memory_manager.go:355] "RemoveStaleState removing state" podUID="1b68c24f-6f09-466e-845d-62d4ef663eb7" containerName="csi-snapshotter"
Mar 17 10:31:09 addons-415393 kubelet[1220]: I0317 10:31:09.046940 1220 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zhk\" (UniqueName: \"kubernetes.io/projected/809f6e34-5eb6-4f8d-b8cd-fe1066c882f6-kube-api-access-77zhk\") pod \"hello-world-app-7d9564db4-dpnkv\" (UID: \"809f6e34-5eb6-4f8d-b8cd-fe1066c882f6\") " pod="default/hello-world-app-7d9564db4-dpnkv"
Mar 17 10:31:10 addons-415393 kubelet[1220]: E0317 10:31:10.042924 1220 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470041909980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:31:10 addons-415393 kubelet[1220]: E0317 10:31:10.042949 1220 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742207470041909980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595375,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 10:31:10 addons-415393 kubelet[1220]: I0317 10:31:10.813263 1220 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wwrf9" secret="" err="secret \"gcp-auth\" not found"
==> storage-provisioner [7e9ed244614c2090e3d50a8f5c3f754560f80216cb988d4cd912a7e347903f0d] <==
I0317 10:26:51.670061 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0317 10:26:51.771864 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0317 10:26:51.771928 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0317 10:26:51.852448 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0317 10:26:51.852622 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-415393_738d246f-bff7-4ba8-b7ed-d8c59512b27f!
I0317 10:26:51.852666 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ca34c28-9d92-465d-982e-28c243536642", APIVersion:"v1", ResourceVersion:"687", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-415393_738d246f-bff7-4ba8-b7ed-d8c59512b27f became leader
I0317 10:26:51.953673 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-415393_738d246f-bff7-4ba8-b7ed-d8c59512b27f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-415393 -n addons-415393
helpers_test.go:261: (dbg) Run: kubectl --context addons-415393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-dpnkv ingress-nginx-admission-create-m7dfw ingress-nginx-admission-patch-2rdmh
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-415393 describe pod hello-world-app-7d9564db4-dpnkv ingress-nginx-admission-create-m7dfw ingress-nginx-admission-patch-2rdmh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-415393 describe pod hello-world-app-7d9564db4-dpnkv ingress-nginx-admission-create-m7dfw ingress-nginx-admission-patch-2rdmh: exit status 1 (67.59969ms)
-- stdout --
Name: hello-world-app-7d9564db4-dpnkv
Namespace: default
Priority: 0
Service Account: default
Node: addons-415393/192.168.39.132
Start Time: Mon, 17 Mar 2025 10:31:08 +0000
Labels: app=hello-world-app
pod-template-hash=7d9564db4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-7d9564db4
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-77zhk (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-77zhk:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-7d9564db4-dpnkv to addons-415393
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-m7dfw" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-2rdmh" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-415393 describe pod hello-world-app-7d9564db4-dpnkv ingress-nginx-admission-create-m7dfw ingress-nginx-admission-patch-2rdmh: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-415393 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-415393 addons disable ingress-dns --alsologtostderr -v=1: (1.335705147s)
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-415393 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-415393 addons disable ingress --alsologtostderr -v=1: (7.645556059s)
--- FAIL: TestAddons/parallel/Ingress (155.40s)