=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-012915 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-012915 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-012915 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8de7ed01-2923-4e6d-8d79-73b590e77823] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8de7ed01-2923-4e6d-8d79-73b590e77823] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003753409s
I0317 12:44:47.246814 629188 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-012915 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-012915 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.743909845s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run: kubectl --context addons-012915 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-012915 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.84
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-012915 -n addons-012915
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-012915 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 logs -n 25: (1.129064349s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| delete | -p download-only-534794 | download-only-534794 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
| delete | -p download-only-793997 | download-only-793997 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
| delete | -p download-only-534794 | download-only-534794 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
| start | --download-only -p | binary-mirror-652834 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | |
| | binary-mirror-652834 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:41719 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-652834 | binary-mirror-652834 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:41 UTC |
| addons | disable dashboard -p | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | |
| | addons-012915 | | | | | |
| addons | enable dashboard -p | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | |
| | addons-012915 | | | | | |
| start | -p addons-012915 --wait=true | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:41 UTC | 17 Mar 25 12:43 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:43 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:43 UTC | 17 Mar 25 12:44 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ssh | addons-012915 ssh cat | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | /opt/local-path-provisioner/pvc-dfd0802a-c635-46e0-a42e-5cc628c5aa4b_default_test-pvc/file1 | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:45 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | enable headlamp | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | -p addons-012915 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-012915 ip | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-012915 addons disable | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:44 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-012915 ssh curl -s | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:44 UTC | 17 Mar 25 12:45 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-012915 addons | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:45 UTC | 17 Mar 25 12:45 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-012915 ip | addons-012915 | jenkins | v1.35.0 | 17 Mar 25 12:46 UTC | 17 Mar 25 12:46 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/03/17 12:41:35
Running on machine: ubuntu-20-agent-2
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0317 12:41:35.813646 629808 out.go:345] Setting OutFile to fd 1 ...
I0317 12:41:35.813855 629808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:41:35.813863 629808 out.go:358] Setting ErrFile to fd 2...
I0317 12:41:35.813867 629808 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0317 12:41:35.814035 629808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20539-621978/.minikube/bin
I0317 12:41:35.815145 629808 out.go:352] Setting JSON to false
I0317 12:41:35.816455 629808 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8640,"bootTime":1742206656,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0317 12:41:35.816546 629808 start.go:139] virtualization: kvm guest
I0317 12:41:35.818208 629808 out.go:177] * [addons-012915] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0317 12:41:35.819935 629808 out.go:177] - MINIKUBE_LOCATION=20539
I0317 12:41:35.819956 629808 notify.go:220] Checking for updates...
I0317 12:41:35.822132 629808 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0317 12:41:35.823194 629808 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20539-621978/kubeconfig
I0317 12:41:35.824478 629808 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20539-621978/.minikube
I0317 12:41:35.825562 629808 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0317 12:41:35.826614 629808 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0317 12:41:35.827758 629808 driver.go:394] Setting default libvirt URI to qemu:///system
I0317 12:41:35.858658 629808 out.go:177] * Using the kvm2 driver based on user configuration
I0317 12:41:35.859786 629808 start.go:297] selected driver: kvm2
I0317 12:41:35.859802 629808 start.go:901] validating driver "kvm2" against <nil>
I0317 12:41:35.859822 629808 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0317 12:41:35.860719 629808 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 12:41:35.860816 629808 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20539-621978/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0317 12:41:35.876109 629808 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0317 12:41:35.876173 629808 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0317 12:41:35.876434 629808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0317 12:41:35.876472 629808 cni.go:84] Creating CNI manager for ""
I0317 12:41:35.876526 629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 12:41:35.876536 629808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0317 12:41:35.876587 629808 start.go:340] cluster config:
{Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 12:41:35.876691 629808 iso.go:125] acquiring lock: {Name:mk5ae9489b9a7b0ce1eec6303442deb1b82bdd34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0317 12:41:35.878380 629808 out.go:177] * Starting "addons-012915" primary control-plane node in "addons-012915" cluster
I0317 12:41:35.879518 629808 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0317 12:41:35.879588 629808 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
I0317 12:41:35.879602 629808 cache.go:56] Caching tarball of preloaded images
I0317 12:41:35.880150 629808 preload.go:172] Found /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0317 12:41:35.880183 629808 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
I0317 12:41:35.880991 629808 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json ...
I0317 12:41:35.881026 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json: {Name:mk1005f934882c41acab1ea5c234ee630faed466 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:41:35.881331 629808 start.go:360] acquireMachinesLock for addons-012915: {Name:mk889c42346a1f2803dd912b56533342807c90af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0317 12:41:35.881758 629808 start.go:364] duration metric: took 387.934µs to acquireMachinesLock for "addons-012915"
I0317 12:41:35.881790 629808 start.go:93] Provisioning new machine with config: &{Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0317 12:41:35.881853 629808 start.go:125] createHost starting for "" (driver="kvm2")
I0317 12:41:35.883346 629808 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0317 12:41:35.883520 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:41:35.883658 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:41:35.897979 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
I0317 12:41:35.898441 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:41:35.898962 629808 main.go:141] libmachine: Using API Version 1
I0317 12:41:35.898990 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:41:35.899318 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:41:35.899513 629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
I0317 12:41:35.899678 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:41:35.899823 629808 start.go:159] libmachine.API.Create for "addons-012915" (driver="kvm2")
I0317 12:41:35.899860 629808 client.go:168] LocalClient.Create starting
I0317 12:41:35.899902 629808 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem
I0317 12:41:36.032339 629808 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem
I0317 12:41:36.670062 629808 main.go:141] libmachine: Running pre-create checks...
I0317 12:41:36.670095 629808 main.go:141] libmachine: (addons-012915) Calling .PreCreateCheck
I0317 12:41:36.670614 629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
I0317 12:41:36.671069 629808 main.go:141] libmachine: Creating machine...
I0317 12:41:36.671086 629808 main.go:141] libmachine: (addons-012915) Calling .Create
I0317 12:41:36.671309 629808 main.go:141] libmachine: (addons-012915) creating KVM machine...
I0317 12:41:36.671332 629808 main.go:141] libmachine: (addons-012915) creating network...
I0317 12:41:36.672732 629808 main.go:141] libmachine: (addons-012915) DBG | found existing default KVM network
I0317 12:41:36.673529 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:36.673309 629830 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00011ef20}
I0317 12:41:36.673562 629808 main.go:141] libmachine: (addons-012915) DBG | created network xml:
I0317 12:41:36.673597 629808 main.go:141] libmachine: (addons-012915) DBG | <network>
I0317 12:41:36.673613 629808 main.go:141] libmachine: (addons-012915) DBG | <name>mk-addons-012915</name>
I0317 12:41:36.673620 629808 main.go:141] libmachine: (addons-012915) DBG | <dns enable='no'/>
I0317 12:41:36.673627 629808 main.go:141] libmachine: (addons-012915) DBG |
I0317 12:41:36.673634 629808 main.go:141] libmachine: (addons-012915) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0317 12:41:36.673640 629808 main.go:141] libmachine: (addons-012915) DBG | <dhcp>
I0317 12:41:36.673646 629808 main.go:141] libmachine: (addons-012915) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0317 12:41:36.673653 629808 main.go:141] libmachine: (addons-012915) DBG | </dhcp>
I0317 12:41:36.673657 629808 main.go:141] libmachine: (addons-012915) DBG | </ip>
I0317 12:41:36.673663 629808 main.go:141] libmachine: (addons-012915) DBG |
I0317 12:41:36.673668 629808 main.go:141] libmachine: (addons-012915) DBG | </network>
I0317 12:41:36.673678 629808 main.go:141] libmachine: (addons-012915) DBG |
I0317 12:41:36.678627 629808 main.go:141] libmachine: (addons-012915) DBG | trying to create private KVM network mk-addons-012915 192.168.39.0/24...
I0317 12:41:36.743769 629808 main.go:141] libmachine: (addons-012915) DBG | private KVM network mk-addons-012915 192.168.39.0/24 created
I0317 12:41:36.743808 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:36.743736 629830 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20539-621978/.minikube
I0317 12:41:36.743839 629808 main.go:141] libmachine: (addons-012915) setting up store path in /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 ...
I0317 12:41:36.743862 629808 main.go:141] libmachine: (addons-012915) building disk image from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0317 12:41:36.743883 629808 main.go:141] libmachine: (addons-012915) Downloading /home/jenkins/minikube-integration/20539-621978/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20539-621978/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0317 12:41:37.007383 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.007238 629830 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa...
I0317 12:41:37.122966 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.122814 629830 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/addons-012915.rawdisk...
I0317 12:41:37.122998 629808 main.go:141] libmachine: (addons-012915) DBG | Writing magic tar header
I0317 12:41:37.123011 629808 main.go:141] libmachine: (addons-012915) DBG | Writing SSH key tar header
I0317 12:41:37.123022 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:37.122940 629830 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 ...
I0317 12:41:37.123037 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915
I0317 12:41:37.123059 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915 (perms=drwx------)
I0317 12:41:37.123075 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube/machines (perms=drwxr-xr-x)
I0317 12:41:37.123088 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube/machines
I0317 12:41:37.123099 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978/.minikube
I0317 12:41:37.123104 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20539-621978
I0317 12:41:37.123111 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0317 12:41:37.123115 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home/jenkins
I0317 12:41:37.123122 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978/.minikube (perms=drwxr-xr-x)
I0317 12:41:37.123135 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration/20539-621978 (perms=drwxrwxr-x)
I0317 12:41:37.123144 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0317 12:41:37.123157 629808 main.go:141] libmachine: (addons-012915) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0317 12:41:37.123166 629808 main.go:141] libmachine: (addons-012915) DBG | checking permissions on dir: /home
I0317 12:41:37.123179 629808 main.go:141] libmachine: (addons-012915) DBG | skipping /home - not owner
I0317 12:41:37.123189 629808 main.go:141] libmachine: (addons-012915) creating domain...
I0317 12:41:37.124332 629808 main.go:141] libmachine: (addons-012915) define libvirt domain using xml:
I0317 12:41:37.124357 629808 main.go:141] libmachine: (addons-012915) <domain type='kvm'>
I0317 12:41:37.124368 629808 main.go:141] libmachine: (addons-012915) <name>addons-012915</name>
I0317 12:41:37.124380 629808 main.go:141] libmachine: (addons-012915) <memory unit='MiB'>4000</memory>
I0317 12:41:37.124393 629808 main.go:141] libmachine: (addons-012915) <vcpu>2</vcpu>
I0317 12:41:37.124399 629808 main.go:141] libmachine: (addons-012915) <features>
I0317 12:41:37.124408 629808 main.go:141] libmachine: (addons-012915) <acpi/>
I0317 12:41:37.124417 629808 main.go:141] libmachine: (addons-012915) <apic/>
I0317 12:41:37.124425 629808 main.go:141] libmachine: (addons-012915) <pae/>
I0317 12:41:37.124432 629808 main.go:141] libmachine: (addons-012915)
I0317 12:41:37.124437 629808 main.go:141] libmachine: (addons-012915) </features>
I0317 12:41:37.124444 629808 main.go:141] libmachine: (addons-012915) <cpu mode='host-passthrough'>
I0317 12:41:37.124452 629808 main.go:141] libmachine: (addons-012915)
I0317 12:41:37.124457 629808 main.go:141] libmachine: (addons-012915) </cpu>
I0317 12:41:37.124512 629808 main.go:141] libmachine: (addons-012915) <os>
I0317 12:41:37.124545 629808 main.go:141] libmachine: (addons-012915) <type>hvm</type>
I0317 12:41:37.124560 629808 main.go:141] libmachine: (addons-012915) <boot dev='cdrom'/>
I0317 12:41:37.124570 629808 main.go:141] libmachine: (addons-012915) <boot dev='hd'/>
I0317 12:41:37.124581 629808 main.go:141] libmachine: (addons-012915) <bootmenu enable='no'/>
I0317 12:41:37.124590 629808 main.go:141] libmachine: (addons-012915) </os>
I0317 12:41:37.124599 629808 main.go:141] libmachine: (addons-012915) <devices>
I0317 12:41:37.124614 629808 main.go:141] libmachine: (addons-012915) <disk type='file' device='cdrom'>
I0317 12:41:37.124668 629808 main.go:141] libmachine: (addons-012915) <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/boot2docker.iso'/>
I0317 12:41:37.124692 629808 main.go:141] libmachine: (addons-012915) <target dev='hdc' bus='scsi'/>
I0317 12:41:37.124703 629808 main.go:141] libmachine: (addons-012915) <readonly/>
I0317 12:41:37.124714 629808 main.go:141] libmachine: (addons-012915) </disk>
I0317 12:41:37.124729 629808 main.go:141] libmachine: (addons-012915) <disk type='file' device='disk'>
I0317 12:41:37.124743 629808 main.go:141] libmachine: (addons-012915) <driver name='qemu' type='raw' cache='default' io='threads' />
I0317 12:41:37.124769 629808 main.go:141] libmachine: (addons-012915) <source file='/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/addons-012915.rawdisk'/>
I0317 12:41:37.124790 629808 main.go:141] libmachine: (addons-012915) <target dev='hda' bus='virtio'/>
I0317 12:41:37.124800 629808 main.go:141] libmachine: (addons-012915) </disk>
I0317 12:41:37.124814 629808 main.go:141] libmachine: (addons-012915) <interface type='network'>
I0317 12:41:37.124827 629808 main.go:141] libmachine: (addons-012915) <source network='mk-addons-012915'/>
I0317 12:41:37.124837 629808 main.go:141] libmachine: (addons-012915) <model type='virtio'/>
I0317 12:41:37.124845 629808 main.go:141] libmachine: (addons-012915) </interface>
I0317 12:41:37.124852 629808 main.go:141] libmachine: (addons-012915) <interface type='network'>
I0317 12:41:37.124857 629808 main.go:141] libmachine: (addons-012915) <source network='default'/>
I0317 12:41:37.124863 629808 main.go:141] libmachine: (addons-012915) <model type='virtio'/>
I0317 12:41:37.124869 629808 main.go:141] libmachine: (addons-012915) </interface>
I0317 12:41:37.124878 629808 main.go:141] libmachine: (addons-012915) <serial type='pty'>
I0317 12:41:37.124891 629808 main.go:141] libmachine: (addons-012915) <target port='0'/>
I0317 12:41:37.124904 629808 main.go:141] libmachine: (addons-012915) </serial>
I0317 12:41:37.124921 629808 main.go:141] libmachine: (addons-012915) <console type='pty'>
I0317 12:41:37.124939 629808 main.go:141] libmachine: (addons-012915) <target type='serial' port='0'/>
I0317 12:41:37.124949 629808 main.go:141] libmachine: (addons-012915) </console>
I0317 12:41:37.124956 629808 main.go:141] libmachine: (addons-012915) <rng model='virtio'>
I0317 12:41:37.124969 629808 main.go:141] libmachine: (addons-012915) <backend model='random'>/dev/random</backend>
I0317 12:41:37.124974 629808 main.go:141] libmachine: (addons-012915) </rng>
I0317 12:41:37.124981 629808 main.go:141] libmachine: (addons-012915)
I0317 12:41:37.124985 629808 main.go:141] libmachine: (addons-012915)
I0317 12:41:37.124990 629808 main.go:141] libmachine: (addons-012915) </devices>
I0317 12:41:37.124996 629808 main.go:141] libmachine: (addons-012915) </domain>
I0317 12:41:37.125003 629808 main.go:141] libmachine: (addons-012915)
I0317 12:41:37.128694 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:88:e3:dc in network default
I0317 12:41:37.129250 629808 main.go:141] libmachine: (addons-012915) starting domain...
I0317 12:41:37.129270 629808 main.go:141] libmachine: (addons-012915) ensuring networks are active...
I0317 12:41:37.129282 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:37.129925 629808 main.go:141] libmachine: (addons-012915) Ensuring network default is active
I0317 12:41:37.130245 629808 main.go:141] libmachine: (addons-012915) Ensuring network mk-addons-012915 is active
I0317 12:41:37.130755 629808 main.go:141] libmachine: (addons-012915) getting domain XML...
I0317 12:41:37.131556 629808 main.go:141] libmachine: (addons-012915) creating domain...
I0317 12:41:38.311165 629808 main.go:141] libmachine: (addons-012915) waiting for IP...
I0317 12:41:38.311986 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:38.312393 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:38.312465 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.312395 629830 retry.go:31] will retry after 253.029131ms: waiting for domain to come up
I0317 12:41:38.566539 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:38.566908 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:38.566934 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.566865 629830 retry.go:31] will retry after 239.315749ms: waiting for domain to come up
I0317 12:41:38.808393 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:38.808821 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:38.808865 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:38.808771 629830 retry.go:31] will retry after 361.01477ms: waiting for domain to come up
I0317 12:41:39.171325 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:39.171724 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:39.171793 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:39.171731 629830 retry.go:31] will retry after 460.672416ms: waiting for domain to come up
I0317 12:41:39.634438 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:39.634848 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:39.634883 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:39.634808 629830 retry.go:31] will retry after 481.725022ms: waiting for domain to come up
I0317 12:41:40.118658 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:40.119109 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:40.119136 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:40.119080 629830 retry.go:31] will retry after 928.899682ms: waiting for domain to come up
I0317 12:41:41.049707 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:41.050130 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:41.050154 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:41.050110 629830 retry.go:31] will retry after 1.035009529s: waiting for domain to come up
I0317 12:41:42.086478 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:42.086846 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:42.086878 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:42.086812 629830 retry.go:31] will retry after 1.159049516s: waiting for domain to come up
I0317 12:41:43.248106 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:43.248441 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:43.248467 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:43.248408 629830 retry.go:31] will retry after 1.261706174s: waiting for domain to come up
I0317 12:41:44.511845 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:44.512402 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:44.512431 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:44.512320 629830 retry.go:31] will retry after 1.687461831s: waiting for domain to come up
I0317 12:41:46.201918 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:46.202361 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:46.202394 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:46.202317 629830 retry.go:31] will retry after 1.948915961s: waiting for domain to come up
I0317 12:41:48.153380 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:48.153764 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:48.153791 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:48.153705 629830 retry.go:31] will retry after 2.589327367s: waiting for domain to come up
I0317 12:41:50.746364 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:50.746758 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:50.746777 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:50.746730 629830 retry.go:31] will retry after 3.250724894s: waiting for domain to come up
I0317 12:41:53.998634 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:53.999024 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find current IP address of domain addons-012915 in network mk-addons-012915
I0317 12:41:53.999079 629808 main.go:141] libmachine: (addons-012915) DBG | I0317 12:41:53.999030 629830 retry.go:31] will retry after 4.576109972s: waiting for domain to come up
I0317 12:41:58.576359 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.576753 629808 main.go:141] libmachine: (addons-012915) found domain IP: 192.168.39.84
I0317 12:41:58.576782 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has current primary IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.576790 629808 main.go:141] libmachine: (addons-012915) reserving static IP address...
I0317 12:41:58.577148 629808 main.go:141] libmachine: (addons-012915) DBG | unable to find host DHCP lease matching {name: "addons-012915", mac: "52:54:00:2b:05:f6", ip: "192.168.39.84"} in network mk-addons-012915
I0317 12:41:58.651196 629808 main.go:141] libmachine: (addons-012915) reserved static IP address 192.168.39.84 for domain addons-012915
I0317 12:41:58.651245 629808 main.go:141] libmachine: (addons-012915) DBG | Getting to WaitForSSH function...
I0317 12:41:58.651255 629808 main.go:141] libmachine: (addons-012915) waiting for SSH...
I0317 12:41:58.653916 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.654263 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:58.654304 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.654411 629808 main.go:141] libmachine: (addons-012915) DBG | Using SSH client type: external
I0317 12:41:58.654449 629808 main.go:141] libmachine: (addons-012915) DBG | Using SSH private key: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa (-rw-------)
I0317 12:41:58.654491 629808 main.go:141] libmachine: (addons-012915) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa -p 22] /usr/bin/ssh <nil>}
I0317 12:41:58.654508 629808 main.go:141] libmachine: (addons-012915) DBG | About to run SSH command:
I0317 12:41:58.654524 629808 main.go:141] libmachine: (addons-012915) DBG | exit 0
I0317 12:41:58.779465 629808 main.go:141] libmachine: (addons-012915) DBG | SSH cmd err, output: <nil>:
I0317 12:41:58.779759 629808 main.go:141] libmachine: (addons-012915) KVM machine creation complete
I0317 12:41:58.780047 629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
I0317 12:41:58.780641 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:41:58.780854 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:41:58.781074 629808 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0317 12:41:58.781090 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:41:58.782308 629808 main.go:141] libmachine: Detecting operating system of created instance...
I0317 12:41:58.782327 629808 main.go:141] libmachine: Waiting for SSH to be available...
I0317 12:41:58.782334 629808 main.go:141] libmachine: Getting to WaitForSSH function...
I0317 12:41:58.782343 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:58.784660 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.784988 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:58.785019 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.785151 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:58.785327 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:58.785512 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:58.785630 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:58.785798 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:41:58.786018 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:41:58.786030 629808 main.go:141] libmachine: About to run SSH command:
exit 0
I0317 12:41:58.894557 629808 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0317 12:41:58.894581 629808 main.go:141] libmachine: Detecting the provisioner...
I0317 12:41:58.894590 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:58.897223 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.897595 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:58.897624 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:58.897751 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:58.897953 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:58.898118 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:58.898279 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:58.898474 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:41:58.898670 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:41:58.898680 629808 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0317 12:41:59.008106 629808 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0317 12:41:59.008176 629808 main.go:141] libmachine: found compatible host: buildroot
I0317 12:41:59.008189 629808 main.go:141] libmachine: Provisioning with buildroot...
I0317 12:41:59.008200 629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
I0317 12:41:59.008463 629808 buildroot.go:166] provisioning hostname "addons-012915"
I0317 12:41:59.008492 629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
I0317 12:41:59.008706 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:59.011455 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.011845 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.011879 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.012026 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:59.012226 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.012395 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.012542 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:59.012710 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:41:59.012964 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:41:59.012979 629808 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-012915 && echo "addons-012915" | sudo tee /etc/hostname
I0317 12:41:59.132159 629808 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-012915
I0317 12:41:59.132201 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:59.135100 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.135522 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.135559 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.135747 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:59.135948 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.136132 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.136247 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:59.136410 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:41:59.136634 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:41:59.136651 629808 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-012915' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-012915/g' /etc/hosts;
else
echo '127.0.1.1 addons-012915' | sudo tee -a /etc/hosts;
fi
fi
I0317 12:41:59.251490 629808 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0317 12:41:59.251521 629808 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20539-621978/.minikube CaCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20539-621978/.minikube}
I0317 12:41:59.251567 629808 buildroot.go:174] setting up certificates
I0317 12:41:59.251584 629808 provision.go:84] configureAuth start
I0317 12:41:59.251598 629808 main.go:141] libmachine: (addons-012915) Calling .GetMachineName
I0317 12:41:59.251863 629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
I0317 12:41:59.254773 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.255076 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.255093 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.255247 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:59.257261 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.257560 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.257588 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.257722 629808 provision.go:143] copyHostCerts
I0317 12:41:59.257790 629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/ca.pem (1082 bytes)
I0317 12:41:59.257902 629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/cert.pem (1123 bytes)
I0317 12:41:59.257959 629808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20539-621978/.minikube/key.pem (1675 bytes)
I0317 12:41:59.258007 629808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem org=jenkins.addons-012915 san=[127.0.0.1 192.168.39.84 addons-012915 localhost minikube]
I0317 12:41:59.717849 629808 provision.go:177] copyRemoteCerts
I0317 12:41:59.717915 629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0317 12:41:59.717945 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:59.720628 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.720963 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.720991 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.721175 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:59.721379 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.721505 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:59.722126 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:41:59.804879 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0317 12:41:59.826876 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0317 12:41:59.848221 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0317 12:41:59.869105 629808 provision.go:87] duration metric: took 617.505097ms to configureAuth
I0317 12:41:59.869139 629808 buildroot.go:189] setting minikube options for container-runtime
I0317 12:41:59.869333 629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:41:59.869412 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:41:59.871922 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.872303 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:41:59.872333 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:41:59.872471 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:41:59.872666 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.872851 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:41:59.872954 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:41:59.873116 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:41:59.873310 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:41:59.873328 629808 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0317 12:42:00.092202 629808 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0317 12:42:00.092245 629808 main.go:141] libmachine: Checking connection to Docker...
I0317 12:42:00.092258 629808 main.go:141] libmachine: (addons-012915) Calling .GetURL
I0317 12:42:00.093695 629808 main.go:141] libmachine: (addons-012915) DBG | using libvirt version 6000000
I0317 12:42:00.095810 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.096173 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.096200 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.096400 629808 main.go:141] libmachine: Docker is up and running!
I0317 12:42:00.096414 629808 main.go:141] libmachine: Reticulating splines...
I0317 12:42:00.096423 629808 client.go:171] duration metric: took 24.196550306s to LocalClient.Create
I0317 12:42:00.096450 629808 start.go:167] duration metric: took 24.196627505s to libmachine.API.Create "addons-012915"
I0317 12:42:00.096465 629808 start.go:293] postStartSetup for "addons-012915" (driver="kvm2")
I0317 12:42:00.096479 629808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0317 12:42:00.096502 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:00.096741 629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0317 12:42:00.096766 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:00.098721 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.099034 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.099058 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.099255 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:00.099455 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:00.099615 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:00.099777 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:00.185091 629808 ssh_runner.go:195] Run: cat /etc/os-release
I0317 12:42:00.189105 629808 info.go:137] Remote host: Buildroot 2023.02.9
I0317 12:42:00.189137 629808 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/addons for local assets ...
I0317 12:42:00.189228 629808 filesync.go:126] Scanning /home/jenkins/minikube-integration/20539-621978/.minikube/files for local assets ...
I0317 12:42:00.189256 629808 start.go:296] duration metric: took 92.784132ms for postStartSetup
I0317 12:42:00.189291 629808 main.go:141] libmachine: (addons-012915) Calling .GetConfigRaw
I0317 12:42:00.189900 629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
I0317 12:42:00.192549 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.192922 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.192951 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.193206 629808 profile.go:143] Saving config to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/config.json ...
I0317 12:42:00.193390 629808 start.go:128] duration metric: took 24.311527321s to createHost
I0317 12:42:00.193418 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:00.195761 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.196087 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.196111 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.196236 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:00.196395 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:00.196515 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:00.196611 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:00.196714 629808 main.go:141] libmachine: Using SSH client type: native
I0317 12:42:00.196892 629808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.84 22 <nil> <nil>}
I0317 12:42:00.196900 629808 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0317 12:42:00.303756 629808 main.go:141] libmachine: SSH cmd err, output: <nil>: 1742215320.281896410
I0317 12:42:00.303779 629808 fix.go:216] guest clock: 1742215320.281896410
I0317 12:42:00.303789 629808 fix.go:229] Guest: 2025-03-17 12:42:00.28189641 +0000 UTC Remote: 2025-03-17 12:42:00.193403202 +0000 UTC m=+24.415050746 (delta=88.493208ms)
I0317 12:42:00.303809 629808 fix.go:200] guest clock delta is within tolerance: 88.493208ms
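The guest-clock check above compares the VM's "date +%s.%N" output against the local timestamp and accepts the machine when the delta is small. A rough sketch of that comparison; the tolerance value here is an assumption, not minikube's actual threshold.

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta converts the guest's epoch string (seconds.nanoseconds) to a
// time and returns how far it lags or leads the supplied reference time.
func clockDelta(guestEpoch string, remote time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return remote.Sub(guest), nil
}

func main() {
	d, err := clockDelta("1742215320.281896410", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		d, math.Abs(float64(d)) <= float64(tolerance))
}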
I0317 12:42:00.303821 629808 start.go:83] releasing machines lock for "addons-012915", held for 24.42203961s
I0317 12:42:00.303846 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:00.304105 629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
I0317 12:42:00.308079 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.308414 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.308434 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.308552 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:00.309037 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:00.309200 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:00.309301 629808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0317 12:42:00.309349 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:00.309356 629808 ssh_runner.go:195] Run: cat /version.json
I0317 12:42:00.309371 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:00.311994 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.312177 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.312318 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.312348 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.312514 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:00.312579 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:00.312604 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:00.312713 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:00.312793 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:00.312874 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:00.313002 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:00.313006 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:00.313125 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:00.313232 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:00.411767 629808 ssh_runner.go:195] Run: systemctl --version
I0317 12:42:00.417321 629808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0317 12:42:00.569133 629808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0317 12:42:00.574456 629808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0317 12:42:00.574543 629808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0317 12:42:00.589077 629808 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0317 12:42:00.589107 629808 start.go:495] detecting cgroup driver to use...
I0317 12:42:00.589187 629808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0317 12:42:00.604322 629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0317 12:42:00.616427 629808 docker.go:217] disabling cri-docker service (if available) ...
I0317 12:42:00.616485 629808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0317 12:42:00.628419 629808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0317 12:42:00.640317 629808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0317 12:42:00.750344 629808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0317 12:42:00.870843 629808 docker.go:233] disabling docker service ...
I0317 12:42:00.870929 629808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0317 12:42:00.884767 629808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0317 12:42:00.897097 629808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0317 12:42:01.031545 629808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0317 12:42:01.136508 629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0317 12:42:01.149270 629808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0317 12:42:01.165797 629808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0317 12:42:01.165881 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.175015 629808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0317 12:42:01.175091 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.184401 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.193730 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.202925 629808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0317 12:42:01.212622 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.222046 629808 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.238009 629808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0317 12:42:01.247198 629808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0317 12:42:01.255480 629808 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0317 12:42:01.255576 629808 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0317 12:42:01.268145 629808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
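The sequence above is a fallback: the bridge-netfilter sysctl is missing until the br_netfilter module is loaded, so the module is loaded and IPv4 forwarding is then enabled. A simplified sketch of that logic, with error handling reduced and sudo assumed to be available.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and logs (rather than aborts on) failure, mirroring
// the "might be okay" tone of the log line above.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	// The sysctl key only exists once br_netfilter is loaded, so a failure
	// here triggers the modprobe fallback.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}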
I0317 12:42:01.284086 629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 12:42:01.383226 629808 ssh_runner.go:195] Run: sudo systemctl restart crio
I0317 12:42:01.470381 629808 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0317 12:42:01.470484 629808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0317 12:42:01.474769 629808 start.go:563] Will wait 60s for crictl version
I0317 12:42:01.474845 629808 ssh_runner.go:195] Run: which crictl
I0317 12:42:01.478396 629808 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0317 12:42:01.510326 629808 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0317 12:42:01.510439 629808 ssh_runner.go:195] Run: crio --version
I0317 12:42:01.535733 629808 ssh_runner.go:195] Run: crio --version
I0317 12:42:01.562912 629808 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
I0317 12:42:01.564090 629808 main.go:141] libmachine: (addons-012915) Calling .GetIP
I0317 12:42:01.566854 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:01.567314 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:01.567346 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:01.567556 629808 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0317 12:42:01.571386 629808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 12:42:01.583014 629808 kubeadm.go:883] updating cluster {Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0317 12:42:01.583132 629808 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0317 12:42:01.583176 629808 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:42:01.612830 629808 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
I0317 12:42:01.612901 629808 ssh_runner.go:195] Run: which lz4
I0317 12:42:01.616545 629808 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0317 12:42:01.620275 629808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0317 12:42:01.620307 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
I0317 12:42:02.725347 629808 crio.go:462] duration metric: took 1.108859948s to copy over tarball
I0317 12:42:02.725449 629808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0317 12:42:04.776691 629808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051203848s)
I0317 12:42:04.776729 629808 crio.go:469] duration metric: took 2.051345132s to extract the tarball
I0317 12:42:04.776741 629808 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0317 12:42:04.812682 629808 ssh_runner.go:195] Run: sudo crictl images --output json
I0317 12:42:04.849626 629808 crio.go:514] all images are preloaded for cri-o runtime.
I0317 12:42:04.849655 629808 cache_images.go:84] Images are preloaded, skipping loading
I0317 12:42:04.849664 629808 kubeadm.go:934] updating node { 192.168.39.84 8443 v1.32.2 crio true true} ...
I0317 12:42:04.849766 629808 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
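The kubelet drop-in shown above is rendered with the node's name and IP substituted in. A minimal text/template sketch of the same idea; the field names are placeholders and the flag list is trimmed, so this is not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// A reduced version of the systemd drop-in printed in the log above.
const unit = `[Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.32.2",
		"NodeName": "addons-012915",
		"NodeIP":   "192.168.39.84",
	})
}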
I0317 12:42:04.849831 629808 ssh_runner.go:195] Run: crio config
I0317 12:42:04.893223 629808 cni.go:84] Creating CNI manager for ""
I0317 12:42:04.893248 629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 12:42:04.893263 629808 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0317 12:42:04.893299 629808 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-012915 NodeName:addons-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0317 12:42:04.893448 629808 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.84
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-012915"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.84"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
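The kubeadm config printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small stdlib sketch that splits such a file on "---" separators and lists each document's kind; the path and the assumption that separators and "kind:" lines are flush-left are simplifications.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind := strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				fmt.Printf("document %d: %s\n", i+1, kind)
			}
		}
	}
}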
I0317 12:42:04.893510 629808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0317 12:42:04.902894 629808 binaries.go:44] Found k8s binaries, skipping transfer
I0317 12:42:04.902960 629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0317 12:42:04.911411 629808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I0317 12:42:04.925933 629808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0317 12:42:04.940481 629808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
I0317 12:42:04.954829 629808 ssh_runner.go:195] Run: grep 192.168.39.84 control-plane.minikube.internal$ /etc/hosts
I0317 12:42:04.958146 629808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0317 12:42:04.968628 629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 12:42:05.094744 629808 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 12:42:05.110683 629808 certs.go:68] Setting up /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915 for IP: 192.168.39.84
I0317 12:42:05.110714 629808 certs.go:194] generating shared ca certs ...
I0317 12:42:05.110742 629808 certs.go:226] acquiring lock for ca certs: {Name:mk3605ede7f6a7f18b88f72b01e6c88954de0ac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.110931 629808 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key
I0317 12:42:05.379441 629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt ...
I0317 12:42:05.379473 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt: {Name:mk2306d0b5e6b3bdf09b5ca5ba5b5152a8f33e5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.379664 629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key ...
I0317 12:42:05.379676 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key: {Name:mkb98ae874d2a940cd7999188309d5d4cc1e9840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.379750 629808 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key
I0317 12:42:05.700832 629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt ...
I0317 12:42:05.700866 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt: {Name:mk83b0d2bb202059ad3d6722f9760f4c15d9c03f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.701026 629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key ...
I0317 12:42:05.701037 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key: {Name:mk602d23e5060990faa0c18974e298bb57706e45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.701105 629808 certs.go:256] generating profile certs ...
I0317 12:42:05.701163 629808 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key
I0317 12:42:05.701177 629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt with IP's: []
I0317 12:42:05.801791 629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt ...
I0317 12:42:05.801824 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.crt: {Name:mka889b589d3c797ca759bcd90957695b79ec05d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.801982 629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key ...
I0317 12:42:05.801991 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/client.key: {Name:mk5aaafc7ac86a71fc676105265d7773ed4cfc8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.802057 629808 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607
I0317 12:42:05.802075 629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.84]
I0317 12:42:05.900875 629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 ...
I0317 12:42:05.900907 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607: {Name:mk59b6f012ba178489c0edd13e4415d60ccfb251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.901067 629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607 ...
I0317 12:42:05.901080 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607: {Name:mk733e1f058e5871887fb33ce0e670b37e9cd10d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:05.901207 629808 certs.go:381] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt.a0ed7607 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt
I0317 12:42:05.901302 629808 certs.go:385] copying /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key.a0ed7607 -> /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key
I0317 12:42:05.901362 629808 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key
I0317 12:42:05.901382 629808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt with IP's: []
I0317 12:42:06.112847 629808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt ...
I0317 12:42:06.112879 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt: {Name:mkb27a59e02d9b31ea54c91b5510eee6e4048918 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:06.113056 629808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key ...
I0317 12:42:06.113092 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key: {Name:mk8a39b41639853a86ca6dd154a950a5dbf083dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:06.113315 629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca-key.pem (1675 bytes)
I0317 12:42:06.113362 629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/ca.pem (1082 bytes)
I0317 12:42:06.113397 629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/cert.pem (1123 bytes)
I0317 12:42:06.113428 629808 certs.go:484] found cert: /home/jenkins/minikube-integration/20539-621978/.minikube/certs/key.pem (1675 bytes)
I0317 12:42:06.114250 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0317 12:42:06.136735 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0317 12:42:06.157459 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0317 12:42:06.178356 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0317 12:42:06.199169 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0317 12:42:06.220060 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0317 12:42:06.241302 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0317 12:42:06.262133 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/profiles/addons-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0317 12:42:06.282598 629808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20539-621978/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0317 12:42:06.302963 629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0317 12:42:06.317290 629808 ssh_runner.go:195] Run: openssl version
I0317 12:42:06.322385 629808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0317 12:42:06.331944 629808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0317 12:42:06.335868 629808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 17 12:42 /usr/share/ca-certificates/minikubeCA.pem
I0317 12:42:06.335925 629808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0317 12:42:06.341035 629808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0317 12:42:06.350789 629808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0317 12:42:06.354231 629808 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0317 12:42:06.354297 629808 kubeadm.go:392] StartCluster: {Name:addons-012915 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1741860993-20523@sha256:cd976907fa4d517c84fff1e5ef773b9fb3c738c4e1ded824ea5133470a66e185 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-012915 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0317 12:42:06.354376 629808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0317 12:42:06.354428 629808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0317 12:42:06.386460 629808 cri.go:89] found id: ""
I0317 12:42:06.386562 629808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0317 12:42:06.395386 629808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0317 12:42:06.403673 629808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0317 12:42:06.411982 629808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0317 12:42:06.412001 629808 kubeadm.go:157] found existing configuration files:
I0317 12:42:06.412040 629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0317 12:42:06.420062 629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0317 12:42:06.420125 629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0317 12:42:06.428260 629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0317 12:42:06.436112 629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0317 12:42:06.436164 629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0317 12:42:06.444055 629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0317 12:42:06.452100 629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0317 12:42:06.452140 629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0317 12:42:06.460408 629808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0317 12:42:06.468180 629808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0317 12:42:06.468220 629808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
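The grep/rm sequence above removes any kubeconfig that does not already point at the expected control-plane endpoint, so kubeadm can regenerate it on init. A compact sketch of that loop, with error handling simplified.

package main

import (
	"log"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file) is missing;
		// in either case the stale file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			log.Printf("%q not found in %s, removing", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}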
I0317 12:42:06.476376 629808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0317 12:42:06.525854 629808 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0317 12:42:06.525968 629808 kubeadm.go:310] [preflight] Running pre-flight checks
I0317 12:42:06.614805 629808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0317 12:42:06.614972 629808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0317 12:42:06.615078 629808 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0317 12:42:06.622002 629808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0317 12:42:06.674280 629808 out.go:235] - Generating certificates and keys ...
I0317 12:42:06.674406 629808 kubeadm.go:310] [certs] Using existing ca certificate authority
I0317 12:42:06.674481 629808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0317 12:42:06.727095 629808 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0317 12:42:06.903647 629808 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0317 12:42:07.175177 629808 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0317 12:42:07.322027 629808 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0317 12:42:07.494521 629808 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0317 12:42:07.494655 629808 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-012915 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
I0317 12:42:07.743681 629808 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0317 12:42:07.743852 629808 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-012915 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
I0317 12:42:08.031224 629808 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0317 12:42:08.255766 629808 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0317 12:42:08.393770 629808 kubeadm.go:310] [certs] Generating "sa" key and public key
I0317 12:42:08.393843 629808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0317 12:42:08.504444 629808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0317 12:42:08.582587 629808 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0317 12:42:08.727062 629808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0317 12:42:08.905914 629808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0317 12:42:09.025638 629808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0317 12:42:09.026109 629808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0317 12:42:09.028540 629808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0317 12:42:09.100251 629808 out.go:235] - Booting up control plane ...
I0317 12:42:09.100392 629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0317 12:42:09.100458 629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0317 12:42:09.100579 629808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0317 12:42:09.100675 629808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0317 12:42:09.100759 629808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0317 12:42:09.100835 629808 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0317 12:42:09.178731 629808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0317 12:42:09.178869 629808 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0317 12:42:09.680249 629808 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.983227ms
I0317 12:42:09.680361 629808 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0317 12:42:14.679505 629808 kubeadm.go:310] [api-check] The API server is healthy after 5.000948176s
I0317 12:42:14.690733 629808 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0317 12:42:14.706164 629808 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0317 12:42:14.736742 629808 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0317 12:42:14.736933 629808 kubeadm.go:310] [mark-control-plane] Marking the node addons-012915 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0317 12:42:14.748800 629808 kubeadm.go:310] [bootstrap-token] Using token: gkbgr9.pbwmys15tasd3j3c
I0317 12:42:14.750107 629808 out.go:235] - Configuring RBAC rules ...
I0317 12:42:14.750283 629808 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0317 12:42:14.765340 629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0317 12:42:14.773817 629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0317 12:42:14.776778 629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0317 12:42:14.780256 629808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0317 12:42:14.786860 629808 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0317 12:42:15.083348 629808 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0317 12:42:15.514277 629808 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0317 12:42:16.083840 629808 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0317 12:42:16.083869 629808 kubeadm.go:310]
I0317 12:42:16.083944 629808 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0317 12:42:16.083951 629808 kubeadm.go:310]
I0317 12:42:16.084035 629808 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0317 12:42:16.084044 629808 kubeadm.go:310]
I0317 12:42:16.084089 629808 kubeadm.go:310] mkdir -p $HOME/.kube
I0317 12:42:16.084183 629808 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0317 12:42:16.084292 629808 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0317 12:42:16.084322 629808 kubeadm.go:310]
I0317 12:42:16.084410 629808 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0317 12:42:16.084421 629808 kubeadm.go:310]
I0317 12:42:16.084492 629808 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0317 12:42:16.084501 629808 kubeadm.go:310]
I0317 12:42:16.084572 629808 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0317 12:42:16.084695 629808 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0317 12:42:16.084810 629808 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0317 12:42:16.084828 629808 kubeadm.go:310]
I0317 12:42:16.084950 629808 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0317 12:42:16.085055 629808 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0317 12:42:16.085069 629808 kubeadm.go:310]
I0317 12:42:16.085191 629808 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gkbgr9.pbwmys15tasd3j3c \
I0317 12:42:16.085351 629808 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7 \
I0317 12:42:16.085380 629808 kubeadm.go:310] --control-plane
I0317 12:42:16.085387 629808 kubeadm.go:310]
I0317 12:42:16.085482 629808 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0317 12:42:16.085495 629808 kubeadm.go:310]
I0317 12:42:16.085598 629808 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token gkbgr9.pbwmys15tasd3j3c \
I0317 12:42:16.085733 629808 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:184403c1d467288ab6c70cdb054c3ce4e3cf50493193e7105288b2f0f121e1d7
I0317 12:42:16.086060 629808 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0317 12:42:16.086100 629808 cni.go:84] Creating CNI manager for ""
I0317 12:42:16.086122 629808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0317 12:42:16.088651 629808 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0317 12:42:16.089943 629808 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0317 12:42:16.100164 629808 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0317 12:42:16.117778 629808 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0317 12:42:16.117949 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:16.117959 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-012915 minikube.k8s.io/updated_at=2025_03_17T12_42_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c minikube.k8s.io/name=addons-012915 minikube.k8s.io/primary=true
I0317 12:42:16.149435 629808 ops.go:34] apiserver oom_adj: -16
I0317 12:42:16.249953 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:16.750311 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:17.250948 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:17.750360 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:18.250729 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:18.751047 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:19.250436 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:19.750852 629808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0317 12:42:19.825690 629808 kubeadm.go:1113] duration metric: took 3.707804893s to wait for elevateKubeSystemPrivileges
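The repeated "kubectl get sa default" runs above are a poll loop: the tool waits for the default service account to exist before granting kube-system privileges and continuing. A rough sketch of such a wait; the 500ms interval and the timeout are assumptions, and kubectl is assumed to be on PATH rather than invoked from the bundled binaries directory.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
// timeout elapses, mirroring the polling pattern in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %v", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		panic(err)
	}
}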
I0317 12:42:19.825736 629808 kubeadm.go:394] duration metric: took 13.47144427s to StartCluster
I0317 12:42:19.825764 629808 settings.go:142] acquiring lock: {Name:mk68edabab79c8a4d0c2b3888b58e49482450002 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:19.825918 629808 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20539-621978/kubeconfig
I0317 12:42:19.826573 629808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20539-621978/kubeconfig: {Name:mka1e8fe47944618b71f5d843879309ae618dc99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0317 12:42:19.826839 629808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0317 12:42:19.826860 629808 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0317 12:42:19.826901 629808 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0317 12:42:19.827033 629808 addons.go:69] Setting cloud-spanner=true in profile "addons-012915"
I0317 12:42:19.827043 629808 addons.go:69] Setting yakd=true in profile "addons-012915"
I0317 12:42:19.827050 629808 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-012915"
I0317 12:42:19.827064 629808 addons.go:238] Setting addon cloud-spanner=true in "addons-012915"
I0317 12:42:19.827070 629808 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-012915"
I0317 12:42:19.827087 629808 addons.go:69] Setting storage-provisioner=true in profile "addons-012915"
I0317 12:42:19.827100 629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:42:19.827110 629808 addons.go:238] Setting addon storage-provisioner=true in "addons-012915"
I0317 12:42:19.827114 629808 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-012915"
I0317 12:42:19.827125 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827132 629808 addons.go:69] Setting volcano=true in profile "addons-012915"
I0317 12:42:19.827135 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827145 629808 addons.go:238] Setting addon volcano=true in "addons-012915"
I0317 12:42:19.827091 629808 addons.go:69] Setting metrics-server=true in profile "addons-012915"
I0317 12:42:19.827158 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827163 629808 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-012915"
I0317 12:42:19.827163 629808 addons.go:238] Setting addon metrics-server=true in "addons-012915"
I0317 12:42:19.827192 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827194 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827772 629808 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-012915"
I0317 12:42:19.827804 629808 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-012915"
I0317 12:42:19.827939 629808 addons.go:238] Setting addon yakd=true in "addons-012915"
I0317 12:42:19.827963 629808 addons.go:69] Setting volumesnapshots=true in profile "addons-012915"
I0317 12:42:19.827980 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.827981 629808 addons.go:238] Setting addon volumesnapshots=true in "addons-012915"
I0317 12:42:19.828008 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.828051 629808 addons.go:69] Setting ingress-dns=true in profile "addons-012915"
I0317 12:42:19.828067 629808 addons.go:238] Setting addon ingress-dns=true in "addons-012915"
I0317 12:42:19.828098 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.828351 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.828406 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.828457 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.828501 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.828541 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.828558 629808 addons.go:69] Setting default-storageclass=true in profile "addons-012915"
I0317 12:42:19.828582 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.828592 629808 addons.go:69] Setting gcp-auth=true in profile "addons-012915"
I0317 12:42:19.828612 629808 mustload.go:65] Loading cluster: addons-012915
I0317 12:42:19.828811 629808 config.go:182] Loaded profile config "addons-012915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0317 12:42:19.829005 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.829039 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.829154 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.829181 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.828582 629808 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-012915"
I0317 12:42:19.830132 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.830169 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.830815 629808 addons.go:69] Setting ingress=true in profile "addons-012915"
I0317 12:42:19.830869 629808 addons.go:238] Setting addon ingress=true in "addons-012915"
I0317 12:42:19.830900 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.830952 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.830959 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.828542 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.835846 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.836947 629808 out.go:177] * Verifying Kubernetes components...
I0317 12:42:19.827115 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.836989 629808 addons.go:69] Setting registry=true in profile "addons-012915"
I0317 12:42:19.837270 629808 addons.go:238] Setting addon registry=true in "addons-012915"
I0317 12:42:19.837308 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.837426 629808 addons.go:69] Setting inspektor-gadget=true in profile "addons-012915"
I0317 12:42:19.837446 629808 addons.go:238] Setting addon inspektor-gadget=true in "addons-012915"
I0317 12:42:19.837470 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.838157 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.838215 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.838752 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.838791 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.839167 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.839195 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.842685 629808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0317 12:42:19.836974 629808 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-012915"
I0317 12:42:19.842826 629808 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-012915"
I0317 12:42:19.842861 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.852179 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.852215 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.852371 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
I0317 12:42:19.852533 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
I0317 12:42:19.852632 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35775
I0317 12:42:19.853167 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.853218 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.859871 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.860222 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.860649 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
I0317 12:42:19.860800 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.860810 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.860835 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.860945 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.860985 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.861231 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.865822 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33375
I0317 12:42:19.866268 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.866289 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.866339 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.866349 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.866499 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.866579 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.866762 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.866829 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.867398 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.867417 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.867495 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.867866 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.867890 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.868441 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.868478 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.882595 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39105
I0317 12:42:19.882625 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.882595 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.882842 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.882855 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.882844 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
I0317 12:42:19.882917 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.883004 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.883032 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
I0317 12:42:19.883055 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.883291 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.883668 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.883692 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.883843 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.884024 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.884118 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.884467 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.884485 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.884560 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.884585 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.884600 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.884878 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.885011 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.885427 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.885464 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.885775 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.886130 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.886150 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.886668 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.886687 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.887884 629808 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-012915"
I0317 12:42:19.887936 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.888282 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.888329 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.891993 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.892014 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.892695 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.892905 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.895518 629808 addons.go:238] Setting addon default-storageclass=true in "addons-012915"
I0317 12:42:19.895598 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:19.895963 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.896001 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.901208 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
I0317 12:42:19.901854 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.902376 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.902398 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.902822 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.903362 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.903405 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.903635 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
I0317 12:42:19.903998 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.904452 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.904477 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.904857 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.905354 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.905400 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.905596 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
I0317 12:42:19.906174 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.906731 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.906747 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.907438 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.908055 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.908095 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.917817 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46071
I0317 12:42:19.918439 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.918998 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.919018 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.919423 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.919990 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.920035 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.920245 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
I0317 12:42:19.920738 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.921250 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.921272 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.921460 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
I0317 12:42:19.921904 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39827
I0317 12:42:19.922045 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42243
I0317 12:42:19.922252 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.922388 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.922418 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.922787 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.922805 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.922936 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.922946 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.923006 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.923411 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.923428 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.923492 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.923851 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.924038 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.924064 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.924225 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.925736 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
I0317 12:42:19.926245 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.926454 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.926692 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.926710 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.927080 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.927735 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.927779 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.928378 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.928551 629808 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0317 12:42:19.929156 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
I0317 12:42:19.929480 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.929906 629808 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0317 12:42:19.929927 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0317 12:42:19.929951 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.930014 629808 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0317 12:42:19.930050 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.930084 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.930707 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.930815 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
I0317 12:42:19.931663 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.931682 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.931752 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.932030 629808 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0317 12:42:19.932048 629808 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0317 12:42:19.932075 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.932524 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.933463 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
I0317 12:42:19.933523 629808 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.30
I0317 12:42:19.934938 629808 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0317 12:42:19.934957 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0317 12:42:19.934975 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.935054 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.935837 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.935917 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.935936 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.936112 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.936327 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.936369 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.936548 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.938207 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.938230 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.938699 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.938885 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.939030 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.939163 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.939214 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.939621 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.939644 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.939671 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37871
I0317 12:42:19.943034 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.943108 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
I0317 12:42:19.943614 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.943675 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
I0317 12:42:19.943829 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.944001 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.944052 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39223
I0317 12:42:19.945245 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
I0317 12:42:19.955871 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
I0317 12:42:19.955882 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
I0317 12:42:19.955872 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
I0317 12:42:19.956072 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.956384 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.956450 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.956881 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.956895 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.956930 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.956949 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.956953 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.956989 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.957447 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.957596 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.957680 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.958049 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958181 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.958235 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958304 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958362 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958482 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.958491 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.958540 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.958590 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958647 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.958763 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.958781 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.958837 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.959116 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.959252 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.959273 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.959396 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.959411 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.959545 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.959557 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.959613 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.959836 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.959855 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.959989 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.960002 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.960066 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.960113 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.960588 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.960653 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.960691 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.961414 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.961509 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.961528 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.961975 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.962019 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.962917 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.962954 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.962960 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.963099 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.963168 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.963211 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.963487 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:19.963502 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:19.963821 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:19.963840 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:19.963850 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:19.963857 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:19.964272 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:19.964308 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:19.965700 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.966017 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.966028 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.966072 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:19.966727 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:19.966744 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:19.966755 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
W0317 12:42:19.966832 629808 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0317 12:42:19.967953 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.968312 629808 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0317 12:42:19.968333 629808 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0317 12:42:19.968312 629808 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0317 12:42:19.968711 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.969543 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0317 12:42:19.969570 629808 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0317 12:42:19.969591 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.969547 629808 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 12:42:19.970459 629808 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0317 12:42:19.970475 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0317 12:42:19.970493 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.971403 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0317 12:42:19.971545 629808 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0317 12:42:19.971563 629808 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0317 12:42:19.971596 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.974165 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0317 12:42:19.975082 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.975155 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.975322 629808 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0317 12:42:19.975701 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.975725 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.976486 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.976531 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.976757 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.976879 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.976966 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.978118 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.978332 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.978503 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.978656 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.979122 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0317 12:42:19.979194 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.979223 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.979814 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.979835 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.980282 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.980650 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.980825 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.980928 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.981941 629808 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 12:42:19.982030 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0317 12:42:19.983757 629808 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0317 12:42:19.983777 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0317 12:42:19.983795 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.983891 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44989
I0317 12:42:19.984900 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
I0317 12:42:19.985200 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0317 12:42:19.985580 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.986093 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.986116 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.986908 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37315
I0317 12:42:19.987216 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.987552 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.987728 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.987744 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.987893 629808 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0317 12:42:19.988221 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.988324 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.988338 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.988439 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.989250 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.989309 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.989508 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.990069 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40565
I0317 12:42:19.990422 629808 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0317 12:42:19.990990 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:19.991370 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:19.991386 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:19.991649 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.991716 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:19.991893 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.992193 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.992212 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.992419 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.992604 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.992801 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.992870 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.992883 629808 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0317 12:42:19.993058 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.993343 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:19.993481 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.994094 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.994165 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0317 12:42:19.994180 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0317 12:42:19.994194 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.994825 629808 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0317 12:42:19.994841 629808 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0317 12:42:19.995338 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:19.995483 629808 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0317 12:42:19.996229 629808 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0317 12:42:19.996258 629808 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0317 12:42:19.996278 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.996907 629808 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0317 12:42:19.996991 629808 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0317 12:42:19.997014 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0317 12:42:19.997036 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.997086 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.997546 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:19.997580 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:19.997816 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:19.997996 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:19.998061 629808 out.go:177] - Using image docker.io/registry:2.8.3
I0317 12:42:19.998118 629808 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0317 12:42:19.998137 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0317 12:42:19.998152 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:19.998306 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:19.998475 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:19.999203 629808 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0317 12:42:19.999219 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0317 12:42:19.999235 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:20.000388 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.001160 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.001210 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.001233 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.001900 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.001905 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.002160 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.002517 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.002624 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.002654 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.003060 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.003254 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.003284 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.003314 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.003483 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.003671 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.004058 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.004081 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.004064 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.004084 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.004102 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.004147 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.004251 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.004290 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.004452 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.004493 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.004579 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.004659 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.007324 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
I0317 12:42:20.007754 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:20.008292 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:20.008310 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:20.008690 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:20.008929 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:20.009476 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41821
I0317 12:42:20.009916 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:20.010504 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:20.010522 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:20.010592 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:20.010967 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:20.011152 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:20.012140 629808 out.go:177] - Using image docker.io/busybox:stable
I0317 12:42:20.012350 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:20.012680 629808 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0317 12:42:20.012695 629808 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0317 12:42:20.012715 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:20.014335 629808 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0317 12:42:20.015952 629808 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0317 12:42:20.015969 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0317 12:42:20.015986 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:20.016330 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.016586 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.016604 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.016848 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.017047 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.017287 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.017471 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.018671 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.018984 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:20.019006 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:20.019176 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:20.019322 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:20.019412 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:20.019490 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:20.297248 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0317 12:42:20.297619 629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0317 12:42:20.297640 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0317 12:42:20.312103 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0317 12:42:20.379295 629808 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0317 12:42:20.382301 629808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0317 12:42:20.414819 629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0317 12:42:20.414841 629808 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0317 12:42:20.415472 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0317 12:42:20.415497 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0317 12:42:20.416541 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0317 12:42:20.491901 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0317 12:42:20.498734 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0317 12:42:20.508832 629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0317 12:42:20.508860 629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0317 12:42:20.513871 629808 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0317 12:42:20.513888 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0317 12:42:20.546612 629808 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0317 12:42:20.546648 629808 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0317 12:42:20.565431 629808 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0317 12:42:20.565460 629808 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0317 12:42:20.596812 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0317 12:42:20.630889 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0317 12:42:20.630926 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0317 12:42:20.636623 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0317 12:42:20.640045 629808 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0317 12:42:20.640066 629808 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0317 12:42:20.641874 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0317 12:42:20.701791 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0317 12:42:20.743417 629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0317 12:42:20.743455 629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0317 12:42:20.771844 629808 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0317 12:42:20.771878 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0317 12:42:20.788611 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0317 12:42:20.870858 629808 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0317 12:42:20.870897 629808 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0317 12:42:20.912079 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0317 12:42:20.912114 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0317 12:42:20.987265 629808 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0317 12:42:20.987295 629808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0317 12:42:21.065518 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0317 12:42:21.117210 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0317 12:42:21.117257 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0317 12:42:21.130122 629808 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0317 12:42:21.130158 629808 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0317 12:42:21.190751 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0317 12:42:21.190786 629808 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0317 12:42:21.282504 629808 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0317 12:42:21.282538 629808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0317 12:42:21.294555 629808 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0317 12:42:21.294585 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0317 12:42:21.424580 629808 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 12:42:21.424614 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0317 12:42:21.493943 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0317 12:42:21.493972 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0317 12:42:21.568812 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0317 12:42:21.666158 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 12:42:21.728233 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0317 12:42:21.728263 629808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0317 12:42:22.095704 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0317 12:42:22.095729 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0317 12:42:22.305434 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0317 12:42:22.305459 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0317 12:42:22.622253 629808 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0317 12:42:22.622280 629808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0317 12:42:22.855937 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0317 12:42:23.365879 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.068590459s)
I0317 12:42:23.365936 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.365938 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.053794114s)
I0317 12:42:23.365948 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.365991 629808 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.986673397s)
I0317 12:42:23.365985 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.366044 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.366075 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.94950966s)
I0317 12:42:23.366103 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.366112 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.366045 629808 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.983720222s)
I0317 12:42:23.366133 629808 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0317 12:42:23.366493 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.366514 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.366524 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.366535 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.366741 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.366781 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.366805 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.366822 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.367381 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:23.367392 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.367406 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.367433 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.367444 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.367452 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:23.367458 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:23.367553 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:23.367580 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.367586 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.367609 629808 node_ready.go:35] waiting up to 6m0s for node "addons-012915" to be "Ready" ...
I0317 12:42:23.367781 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:23.367808 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:23.367814 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:23.367406 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:23.382262 629808 node_ready.go:49] node "addons-012915" has status "Ready":"True"
I0317 12:42:23.382287 629808 node_ready.go:38] duration metric: took 14.660522ms for node "addons-012915" to be "Ready" ...
I0317 12:42:23.382297 629808 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 12:42:23.525554 629808 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace to be "Ready" ...
I0317 12:42:23.966938 629808 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-012915" context rescaled to 1 replicas
I0317 12:42:25.574348 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.08239362s)
I0317 12:42:25.574410 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.574424 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.574418 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.075649671s)
I0317 12:42:25.574456 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.977607547s)
I0317 12:42:25.574465 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.574478 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.574483 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.574499 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.574747 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.574770 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.574781 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.574789 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.574853 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.574873 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.574884 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.574882 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:25.574892 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.574893 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:25.574966 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:25.574991 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.574998 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.575584 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.575602 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.575616 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.575624 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.575809 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.575816 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.576578 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.576591 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.582161 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:25.628507 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:25.628539 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:25.628817 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:25.628838 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:25.628866 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:26.794443 629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0317 12:42:26.794485 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:26.798095 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:26.798474 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:26.798506 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:26.798762 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:26.798993 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:26.799159 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:26.799292 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:27.164818 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.528149942s)
I0317 12:42:27.164874 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.522968285s)
I0317 12:42:27.164892 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.164906 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.164918 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.164932 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.164956 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.463136829s)
I0317 12:42:27.164983 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.164997 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165078 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.376434924s)
I0317 12:42:27.165112 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.165125 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165126 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.099553539s)
I0317 12:42:27.165198 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.165210 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.165219 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.165223 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.165228 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.165237 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165240 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.165249 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.165251 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.165256 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165344 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.596500278s)
I0317 12:42:27.165368 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.165387 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165387 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.165409 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.165536 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.499341622s)
W0317 12:42:27.165575 629808 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0317 12:42:27.165605 629808 retry.go:31] will retry after 176.800041ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0317 12:42:27.165656 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.165671 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.166049 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.166066 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.166084 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.166096 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.166104 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.166110 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.166155 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.166161 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.166406 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.166445 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.166452 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.166462 629808 addons.go:479] Verifying addon registry=true in "addons-012915"
I0317 12:42:27.167873 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.167896 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.167896 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.167906 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.167916 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.167923 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.167942 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.167874 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.167970 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.167983 629808 addons.go:479] Verifying addon ingress=true in "addons-012915"
I0317 12:42:27.167960 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.168391 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.168486 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.168508 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.169872 629808 out.go:177] * Verifying registry addon...
I0317 12:42:27.169903 629808 out.go:177] * Verifying ingress addon...
I0317 12:42:27.170046 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.170061 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.170894 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.170911 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.170920 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.171133 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.171150 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.171160 629808 addons.go:479] Verifying addon metrics-server=true in "addons-012915"
I0317 12:42:27.171239 629808 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-012915 service yakd-dashboard -n yakd-dashboard
I0317 12:42:27.172109 629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0317 12:42:27.172769 629808 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0317 12:42:27.177635 629808 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0317 12:42:27.177659 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:27.199182 629808 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0317 12:42:27.199224 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:27.213504 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:27.213528 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:27.213846 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:27.213868 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:27.213870 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:27.257729 629808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0317 12:42:27.338908 629808 addons.go:238] Setting addon gcp-auth=true in "addons-012915"
I0317 12:42:27.338979 629808 host.go:66] Checking if "addons-012915" exists ...
I0317 12:42:27.339273 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:27.339321 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:27.342552 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0317 12:42:27.354754 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
I0317 12:42:27.355186 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:27.355639 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:27.355661 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:27.356015 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:27.356672 629808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0317 12:42:27.356719 629808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0317 12:42:27.371587 629808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
I0317 12:42:27.372069 629808 main.go:141] libmachine: () Calling .GetVersion
I0317 12:42:27.372531 629808 main.go:141] libmachine: Using API Version 1
I0317 12:42:27.372554 629808 main.go:141] libmachine: () Calling .SetConfigRaw
I0317 12:42:27.372950 629808 main.go:141] libmachine: () Calling .GetMachineName
I0317 12:42:27.373153 629808 main.go:141] libmachine: (addons-012915) Calling .GetState
I0317 12:42:27.374804 629808 main.go:141] libmachine: (addons-012915) Calling .DriverName
I0317 12:42:27.375036 629808 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0317 12:42:27.375061 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHHostname
I0317 12:42:27.377884 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:27.378284 629808 main.go:141] libmachine: (addons-012915) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:05:f6", ip: ""} in network mk-addons-012915: {Iface:virbr1 ExpiryTime:2025-03-17 13:41:50 +0000 UTC Type:0 Mac:52:54:00:2b:05:f6 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:addons-012915 Clientid:01:52:54:00:2b:05:f6}
I0317 12:42:27.378311 629808 main.go:141] libmachine: (addons-012915) DBG | domain addons-012915 has defined IP address 192.168.39.84 and MAC address 52:54:00:2b:05:f6 in network mk-addons-012915
I0317 12:42:27.378514 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHPort
I0317 12:42:27.378705 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHKeyPath
I0317 12:42:27.378881 629808 main.go:141] libmachine: (addons-012915) Calling .GetSSHUsername
I0317 12:42:27.379044 629808 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20539-621978/.minikube/machines/addons-012915/id_rsa Username:docker}
I0317 12:42:27.677030 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:27.677083 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:28.029599 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:28.183837 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:28.183933 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:28.431420 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.575421913s)
I0317 12:42:28.431480 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:28.431501 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:28.431819 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:28.431838 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:28.431847 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:28.431855 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:28.432119 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:28.432133 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:28.432145 629808 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-012915"
I0317 12:42:28.433650 629808 out.go:177] * Verifying csi-hostpath-driver addon...
I0317 12:42:28.435688 629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0317 12:42:28.442551 629808 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0317 12:42:28.442565 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:28.680986 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:28.680984 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:28.940881 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:29.176120 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:29.177109 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:29.325544 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.982873895s)
I0317 12:42:29.325638 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:29.325639 629808 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.950584255s)
I0317 12:42:29.325661 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:29.327827 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:29.327920 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:29.327936 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:29.327957 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:29.328547 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:29.328704 629808 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0317 12:42:29.328828 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:29.328878 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:29.328849 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:29.331392 629808 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0317 12:42:29.332532 629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0317 12:42:29.332561 629808 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0317 12:42:29.401043 629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0317 12:42:29.401083 629808 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0317 12:42:29.438225 629808 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0317 12:42:29.438258 629808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0317 12:42:29.440504 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:29.461366 629808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0317 12:42:29.675195 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:29.676295 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:29.939006 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:30.031059 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:30.175990 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:30.176498 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:30.469448 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:30.552315 629808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.090899518s)
I0317 12:42:30.552377 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:30.552388 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:30.552696 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:30.552722 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:30.552731 629808 main.go:141] libmachine: Making call to close driver server
I0317 12:42:30.552730 629808 main.go:141] libmachine: (addons-012915) DBG | Closing plugin on server side
I0317 12:42:30.552738 629808 main.go:141] libmachine: (addons-012915) Calling .Close
I0317 12:42:30.553015 629808 main.go:141] libmachine: Successfully made call to close driver server
I0317 12:42:30.553064 629808 main.go:141] libmachine: Making call to close connection to plugin binary
I0317 12:42:30.555013 629808 addons.go:479] Verifying addon gcp-auth=true in "addons-012915"
I0317 12:42:30.556730 629808 out.go:177] * Verifying gcp-auth addon...
I0317 12:42:30.558828 629808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0317 12:42:30.622204 629808 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0317 12:42:30.622235 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:30.678439 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:30.679400 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:30.939775 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:31.062414 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:31.176312 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:31.176920 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:31.440287 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:31.561808 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:31.676310 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:31.676358 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:31.939634 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:32.061464 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:32.176322 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:32.176412 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:32.440402 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:32.531758 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:32.562371 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:32.676892 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:32.677047 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:32.939380 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:33.062362 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:33.176279 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:33.176347 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:33.440019 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:33.561384 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:33.675452 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:33.676698 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:33.938913 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:34.061750 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:34.176662 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:34.176788 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:34.439125 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:34.531837 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:34.562453 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:34.675627 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:34.676407 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:34.940603 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:35.063505 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:35.176258 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:35.176291 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:35.440938 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:35.562777 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:35.677167 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:35.677299 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:35.940075 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:36.062337 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:36.175807 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:36.177068 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:36.440330 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:36.532236 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:36.562553 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:36.675869 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:36.676077 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:36.939587 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:37.063040 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:37.176394 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:37.176520 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:37.440253 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:37.562502 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:37.675630 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:37.676184 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:37.939384 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:38.061640 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:38.176072 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:38.176183 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:38.439348 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:38.561954 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:38.676913 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:38.676938 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:38.939643 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:39.422449 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:39.422586 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:39.422686 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:39.422975 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:39.520636 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:39.562354 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:39.675094 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:39.675924 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:39.940446 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:40.062659 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:40.175720 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:40.175995 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:40.439334 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:40.562819 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:40.677578 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:40.677791 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:40.939307 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:41.061692 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:41.175519 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:41.175887 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:41.438954 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:41.530801 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:41.562309 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:41.675437 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:41.676475 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:41.939835 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:42.062175 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:42.174934 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:42.176275 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:42.439362 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:42.561196 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:42.675522 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:42.675906 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:42.939700 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:43.478616 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:43.479087 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:43.479175 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:43.482070 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:43.531269 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:43.580013 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:43.680790 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:43.680825 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:43.938793 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:44.062400 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:44.174903 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:44.176586 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:44.439953 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:44.564885 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:44.677004 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:44.677958 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:44.939413 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:45.062087 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:45.177076 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:45.177120 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:45.439576 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:45.561384 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:45.675484 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:45.675493 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:45.939208 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:46.031082 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:46.062369 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:46.175314 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:46.179352 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:46.439784 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:46.562554 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:46.676489 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:46.677442 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:46.940295 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:47.061680 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:47.175742 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:47.176200 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:47.439919 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:47.562915 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:47.676865 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:47.677284 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:47.940421 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:48.031481 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:48.062882 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:48.176012 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:48.176584 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:48.439737 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:48.562843 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:48.676535 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:48.676536 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:48.938590 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:49.062279 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:49.175389 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:49.176884 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:49.440138 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:49.563016 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:49.677039 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:49.677038 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:49.938974 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:50.061896 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:50.176902 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:50.176970 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:50.439317 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:50.531001 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:50.561407 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:50.675227 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:50.675368 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:50.940528 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:51.069121 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:51.176732 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:51.176735 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:51.439065 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:51.562561 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:51.675384 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:51.676629 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:51.939444 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:52.062171 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:52.174863 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:52.175705 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:52.438522 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:52.531362 629808 pod_ready.go:103] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"False"
I0317 12:42:52.562292 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:52.674982 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:52.676730 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:52.938824 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:53.062392 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:53.176496 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:53.176519 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:53.440042 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:53.561654 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:53.675762 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:53.676165 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:53.942279 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:54.031943 629808 pod_ready.go:93] pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.031969 629808 pod_ready.go:82] duration metric: took 30.506379989s for pod "amd-gpu-device-plugin-5pkbv" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.031979 629808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.033529 629808 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jxb6r" not found
I0317 12:42:54.033554 629808 pod_ready.go:82] duration metric: took 1.568065ms for pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace to be "Ready" ...
E0317 12:42:54.033563 629808 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-jxb6r" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-jxb6r" not found
I0317 12:42:54.033572 629808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.038119 629808 pod_ready.go:93] pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.038138 629808 pod_ready.go:82] duration metric: took 4.557967ms for pod "coredns-668d6bf9bc-z7dq4" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.038148 629808 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.042147 629808 pod_ready.go:93] pod "etcd-addons-012915" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.042163 629808 pod_ready.go:82] duration metric: took 4.010086ms for pod "etcd-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.042171 629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.045539 629808 pod_ready.go:93] pod "kube-apiserver-addons-012915" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.045556 629808 pod_ready.go:82] duration metric: took 3.379345ms for pod "kube-apiserver-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.045564 629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.061403 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:54.175296 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:54.175361 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:54.230108 629808 pod_ready.go:93] pod "kube-controller-manager-addons-012915" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.230135 629808 pod_ready.go:82] duration metric: took 184.565575ms for pod "kube-controller-manager-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.230148 629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gfpml" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.438989 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:54.562061 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:54.629571 629808 pod_ready.go:93] pod "kube-proxy-gfpml" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:54.629598 629808 pod_ready.go:82] duration metric: took 399.442811ms for pod "kube-proxy-gfpml" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.629611 629808 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:54.675395 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:54.676040 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:54.940219 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:55.029161 629808 pod_ready.go:93] pod "kube-scheduler-addons-012915" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:55.029189 629808 pod_ready.go:82] duration metric: took 399.569707ms for pod "kube-scheduler-addons-012915" in "kube-system" namespace to be "Ready" ...
I0317 12:42:55.029204 629808 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace to be "Ready" ...
I0317 12:42:55.061870 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:55.176167 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:55.176890 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:55.429753 629808 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace has status "Ready":"True"
I0317 12:42:55.429780 629808 pod_ready.go:82] duration metric: took 400.567661ms for pod "nvidia-device-plugin-daemonset-gr4p2" in "kube-system" namespace to be "Ready" ...
I0317 12:42:55.429791 629808 pod_ready.go:39] duration metric: took 32.047481057s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0317 12:42:55.429817 629808 api_server.go:52] waiting for apiserver process to appear ...
I0317 12:42:55.429887 629808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0317 12:42:55.438618 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:55.467640 629808 api_server.go:72] duration metric: took 35.64073254s to wait for apiserver process to appear ...
I0317 12:42:55.467677 629808 api_server.go:88] waiting for apiserver healthz status ...
I0317 12:42:55.467704 629808 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
I0317 12:42:55.471885 629808 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
ok
I0317 12:42:55.472829 629808 api_server.go:141] control plane version: v1.32.2
I0317 12:42:55.472852 629808 api_server.go:131] duration metric: took 5.168362ms to wait for apiserver health ...
I0317 12:42:55.472860 629808 system_pods.go:43] waiting for kube-system pods to appear ...
I0317 12:42:55.562345 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:55.631728 629808 system_pods.go:59] 18 kube-system pods found
I0317 12:42:55.631787 629808 system_pods.go:61] "amd-gpu-device-plugin-5pkbv" [8713b029-97c0-4a95-a703-886b238a1cf1] Running
I0317 12:42:55.631799 629808 system_pods.go:61] "coredns-668d6bf9bc-z7dq4" [0a5b10dc-42b3-4a25-9f03-222b3324baf9] Running
I0317 12:42:55.631810 629808 system_pods.go:61] "csi-hostpath-attacher-0" [592324f8-e091-4a7a-a486-93d54a56c0f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0317 12:42:55.631819 629808 system_pods.go:61] "csi-hostpath-resizer-0" [610b15f2-2636-4b8f-9363-d7c6eca55342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0317 12:42:55.631834 629808 system_pods.go:61] "csi-hostpathplugin-n92cl" [6895b42e-cbbb-4c89-93d5-601d91db4e4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0317 12:42:55.631846 629808 system_pods.go:61] "etcd-addons-012915" [8e4cb301-ecab-427b-af78-4451c425dc9e] Running
I0317 12:42:55.631853 629808 system_pods.go:61] "kube-apiserver-addons-012915" [7a0533b2-7ed1-4fb2-9377-e781b3a3b8a4] Running
I0317 12:42:55.631861 629808 system_pods.go:61] "kube-controller-manager-addons-012915" [221489e4-a706-4054-817b-097f795a4c7b] Running
I0317 12:42:55.631867 629808 system_pods.go:61] "kube-ingress-dns-minikube" [7fc15951-f543-49f2-aa75-cfec8ee9f60a] Running
I0317 12:42:55.631875 629808 system_pods.go:61] "kube-proxy-gfpml" [c443023b-cd1a-4c68-95ea-21e945f88e15] Running
I0317 12:42:55.631881 629808 system_pods.go:61] "kube-scheduler-addons-012915" [42e2b951-6247-4322-9e01-755f18bd2c8f] Running
I0317 12:42:55.631894 629808 system_pods.go:61] "metrics-server-7fbb699795-p2svs" [7cb96dd5-6d04-4b62-a0c5-af14472757d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0317 12:42:55.631900 629808 system_pods.go:61] "nvidia-device-plugin-daemonset-gr4p2" [c678dd53-1e45-417a-b06d-c754b6a9ace2] Running
I0317 12:42:55.631909 629808 system_pods.go:61] "registry-6c88467877-8k6gk" [f211c29d-606d-447b-a8fa-69017766f2db] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0317 12:42:55.631921 629808 system_pods.go:61] "registry-proxy-7r5g2" [ada328aa-4416-4e30-a5df-7dc790f2663a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0317 12:42:55.631932 629808 system_pods.go:61] "snapshot-controller-68b874b76f-6rz9m" [10d4e550-08b3-4311-afc0-fbaa5490aa26] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 12:42:55.631938 629808 system_pods.go:61] "snapshot-controller-68b874b76f-9q74h" [a32b085e-c793-429b-9098-8df28c689c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 12:42:55.631947 629808 system_pods.go:61] "storage-provisioner" [c7f99af7-bb01-4504-ac70-77dc8ab04b3e] Running
I0317 12:42:55.631958 629808 system_pods.go:74] duration metric: took 159.091098ms to wait for pod list to return data ...
I0317 12:42:55.631973 629808 default_sa.go:34] waiting for default service account to be created ...
I0317 12:42:55.676919 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:55.678428 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:55.829977 629808 default_sa.go:45] found service account: "default"
I0317 12:42:55.830002 629808 default_sa.go:55] duration metric: took 198.018531ms for default service account to be created ...
I0317 12:42:55.830010 629808 system_pods.go:116] waiting for k8s-apps to be running ...
I0317 12:42:55.940797 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:56.043626 629808 system_pods.go:86] 18 kube-system pods found
I0317 12:42:56.043656 629808 system_pods.go:89] "amd-gpu-device-plugin-5pkbv" [8713b029-97c0-4a95-a703-886b238a1cf1] Running
I0317 12:42:56.043662 629808 system_pods.go:89] "coredns-668d6bf9bc-z7dq4" [0a5b10dc-42b3-4a25-9f03-222b3324baf9] Running
I0317 12:42:56.043669 629808 system_pods.go:89] "csi-hostpath-attacher-0" [592324f8-e091-4a7a-a486-93d54a56c0f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0317 12:42:56.043676 629808 system_pods.go:89] "csi-hostpath-resizer-0" [610b15f2-2636-4b8f-9363-d7c6eca55342] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0317 12:42:56.043683 629808 system_pods.go:89] "csi-hostpathplugin-n92cl" [6895b42e-cbbb-4c89-93d5-601d91db4e4e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0317 12:42:56.043688 629808 system_pods.go:89] "etcd-addons-012915" [8e4cb301-ecab-427b-af78-4451c425dc9e] Running
I0317 12:42:56.043691 629808 system_pods.go:89] "kube-apiserver-addons-012915" [7a0533b2-7ed1-4fb2-9377-e781b3a3b8a4] Running
I0317 12:42:56.043695 629808 system_pods.go:89] "kube-controller-manager-addons-012915" [221489e4-a706-4054-817b-097f795a4c7b] Running
I0317 12:42:56.043702 629808 system_pods.go:89] "kube-ingress-dns-minikube" [7fc15951-f543-49f2-aa75-cfec8ee9f60a] Running
I0317 12:42:56.043705 629808 system_pods.go:89] "kube-proxy-gfpml" [c443023b-cd1a-4c68-95ea-21e945f88e15] Running
I0317 12:42:56.043711 629808 system_pods.go:89] "kube-scheduler-addons-012915" [42e2b951-6247-4322-9e01-755f18bd2c8f] Running
I0317 12:42:56.043715 629808 system_pods.go:89] "metrics-server-7fbb699795-p2svs" [7cb96dd5-6d04-4b62-a0c5-af14472757d1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0317 12:42:56.043721 629808 system_pods.go:89] "nvidia-device-plugin-daemonset-gr4p2" [c678dd53-1e45-417a-b06d-c754b6a9ace2] Running
I0317 12:42:56.043726 629808 system_pods.go:89] "registry-6c88467877-8k6gk" [f211c29d-606d-447b-a8fa-69017766f2db] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0317 12:42:56.043733 629808 system_pods.go:89] "registry-proxy-7r5g2" [ada328aa-4416-4e30-a5df-7dc790f2663a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0317 12:42:56.043740 629808 system_pods.go:89] "snapshot-controller-68b874b76f-6rz9m" [10d4e550-08b3-4311-afc0-fbaa5490aa26] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 12:42:56.043748 629808 system_pods.go:89] "snapshot-controller-68b874b76f-9q74h" [a32b085e-c793-429b-9098-8df28c689c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0317 12:42:56.043751 629808 system_pods.go:89] "storage-provisioner" [c7f99af7-bb01-4504-ac70-77dc8ab04b3e] Running
I0317 12:42:56.043759 629808 system_pods.go:126] duration metric: took 213.743605ms to wait for k8s-apps to be running ...
I0317 12:42:56.043771 629808 system_svc.go:44] waiting for kubelet service to be running ....
I0317 12:42:56.043818 629808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0317 12:42:56.057578 629808 system_svc.go:56] duration metric: took 13.79821ms WaitForService to wait for kubelet
I0317 12:42:56.057607 629808 kubeadm.go:582] duration metric: took 36.230706889s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0317 12:42:56.057628 629808 node_conditions.go:102] verifying NodePressure condition ...
I0317 12:42:56.062024 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:56.176645 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:56.176824 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:56.229839 629808 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0317 12:42:56.229878 629808 node_conditions.go:123] node cpu capacity is 2
I0317 12:42:56.229897 629808 node_conditions.go:105] duration metric: took 172.262786ms to run NodePressure ...
I0317 12:42:56.229914 629808 start.go:241] waiting for startup goroutines ...
I0317 12:42:56.438910 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:56.562508 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:56.676230 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:56.676320 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:56.940012 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:57.061619 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:57.175907 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:57.175914 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:57.439014 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:57.561593 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:57.676244 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:57.676248 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:57.939781 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:58.062264 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:58.174978 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:58.176486 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:58.439452 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:58.563294 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:58.674767 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:58.676491 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:58.939985 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:59.061728 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:59.175581 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:59.176202 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:59.439139 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:42:59.561775 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:42:59.676897 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:42:59.676966 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:42:59.938738 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:00.061828 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:00.176279 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:43:00.176509 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:00.439982 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:00.562510 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:00.675153 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0317 12:43:00.675373 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:00.941154 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:01.062240 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:01.175203 629808 kapi.go:107] duration metric: took 34.003090918s to wait for kubernetes.io/minikube-addons=registry ...
I0317 12:43:01.176489 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:01.439722 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:01.562728 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:01.677533 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:01.940150 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:02.061882 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:02.175722 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:02.439056 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:02.563380 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:02.676353 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:02.941926 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:03.063082 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:03.176888 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:03.439973 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:03.561493 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:03.676323 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:03.946628 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:04.067290 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:04.177103 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:04.440068 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:04.561692 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:04.677115 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:04.939043 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:05.062051 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:05.176129 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:05.438972 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:05.561443 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:05.676802 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:05.939511 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:06.068338 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:06.176081 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:06.439696 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:06.562779 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:06.676945 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:06.938982 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:07.061620 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:07.178671 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:07.439145 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:07.561670 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:07.676664 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:07.941374 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:08.062367 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:08.176032 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:08.438905 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:08.562267 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:08.676461 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:08.942134 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:09.062154 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:09.176280 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:09.439358 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:09.562005 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:09.686123 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:09.938651 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:10.062425 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:10.176445 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:10.439544 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:10.563624 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:10.676565 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:10.939876 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:11.061302 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:11.176393 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:11.439962 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:11.561779 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:11.676627 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:11.939823 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:12.062978 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:12.175763 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:12.438615 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:12.561953 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:12.675631 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:12.940007 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:13.061911 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:13.176330 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:13.442420 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:13.561312 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:13.676156 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:13.940588 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:14.062184 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:14.176082 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:14.474140 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:14.562780 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:14.676734 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:14.938963 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:15.063222 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:15.177412 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:15.439653 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:15.562450 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:15.680006 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:15.938519 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:16.062237 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:16.176354 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:16.439392 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:16.562004 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:16.677517 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:16.939438 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:17.061683 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:17.175900 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:17.438932 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:17.562028 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:17.678255 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:17.946145 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:18.062239 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:18.176537 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:18.447757 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:18.562943 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:18.682327 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:18.940584 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:19.062392 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:19.179940 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:19.439580 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:19.562303 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:19.676387 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:19.941886 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:20.134778 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:20.177786 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:20.440918 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:20.563067 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:20.676935 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:20.939933 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:21.062106 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:21.184568 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:21.440225 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:21.562623 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:21.677019 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:21.939438 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:22.062129 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:22.180344 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:22.439128 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:22.561307 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:22.675935 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:22.941259 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:23.061959 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:23.176033 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:23.696429 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:23.696602 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:23.696809 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:23.940937 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:24.061737 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:24.176675 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:24.439759 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:24.562432 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:24.676494 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:24.939205 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:25.062547 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:25.176786 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:25.439248 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:25.562636 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:25.677108 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:25.940074 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:26.061776 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:26.176692 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:26.440251 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:26.563675 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:26.676954 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:26.939684 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:27.062084 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:27.176204 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:27.439178 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:27.561800 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:27.676859 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:27.939582 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:28.063379 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:28.176732 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:28.438679 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:28.561952 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:28.681125 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:28.939970 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:29.061454 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:29.176286 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:29.439957 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:29.562490 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:29.676226 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:29.939097 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:30.061971 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:30.181859 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:30.438649 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:30.562346 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:30.676435 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:30.943077 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:31.061500 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:31.176532 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:31.439506 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:31.562394 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:31.676174 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:31.939759 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:32.062122 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:32.178470 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:32.439625 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0317 12:43:32.562976 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:32.676450 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:32.940100 629808 kapi.go:107] duration metric: took 1m4.504416534s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0317 12:43:33.061618 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:33.176485 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:33.562256 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:33.677098 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:34.062325 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:34.176823 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:34.562284 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:34.676476 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:35.062542 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:35.176364 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:35.562188 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:35.676402 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:36.062530 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:36.176495 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:36.562344 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:36.676487 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:37.062616 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:37.353440 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:37.562281 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:37.676110 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:38.062913 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:38.175497 629808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0317 12:43:38.562763 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:38.677016 629808 kapi.go:107] duration metric: took 1m11.504242894s to wait for app.kubernetes.io/name=ingress-nginx ...
I0317 12:43:39.062130 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:39.561941 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:40.062866 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:40.562682 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:41.062695 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:41.562016 629808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0317 12:43:42.062827 629808 kapi.go:107] duration metric: took 1m11.503992842s to wait for kubernetes.io/minikube-addons=gcp-auth ...
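Each kapi.go line above polls pods by label selector until they become ready. A minimal equivalent check from outside the test harness, assuming the label shown above and a kubectl new enough to accept --all-namespaces on wait, might look like:
kubectl --context addons-012915 wait --for=condition=ready pod --all-namespaces -l kubernetes.io/minikube-addons=gcp-auth --timeout=90s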
I0317 12:43:42.064674 629808 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-012915 cluster.
I0317 12:43:42.066050 629808 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0317 12:43:42.067355 629808 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0317 12:43:42.068758 629808 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0317 12:43:42.070069 629808 addons.go:514] duration metric: took 1m22.243173543s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin cloud-spanner storage-provisioner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0317 12:43:42.070118 629808 start.go:246] waiting for cluster config update ...
I0317 12:43:42.070143 629808 start.go:255] writing updated cluster config ...
I0317 12:43:42.070414 629808 ssh_runner.go:195] Run: rm -f paused
I0317 12:43:42.122322 629808 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0317 12:43:42.124033 629808 out.go:177] * Done! kubectl is now configured to use "addons-012915" cluster and "default" namespace by default
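The gcp-auth notes above mention the gcp-auth-skip-secret label and the --refresh option. A minimal sketch of both follows; the pod name nginx-nocreds is purely hypothetical, and the label only takes effect for pods created after it is set:
kubectl --context addons-012915 run nginx-nocreds --image=nginx --labels=gcp-auth-skip-secret=true
out/minikube-linux-amd64 -p addons-012915 addons enable gcp-auth --refresh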
==> CRI-O <==
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.362144411Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d0cfa5c-b107-4eda-887b-5ad0da876a56 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.363313357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215618363276749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d0cfa5c-b107-4eda-887b-5ad0da876a56 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.363998987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.364119038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.364452816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67aded8c-36de-4229-b3d3-f3383b765b57 name=/runtime.v1.RuntimeServ
ice/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.368541068Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.395523128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91f0a84f-4c4e-4812-a12e-81129b8abc78 name=/runtime.v1.RuntimeService/Version
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.395607042Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91f0a84f-4c4e-4812-a12e-81129b8abc78 name=/runtime.v1.RuntimeService/Version
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.396709685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed213f12-fdd4-4ca4-b150-6a0450a1b009 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.397832759Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215618397805395,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed213f12-fdd4-4ca4-b150-6a0450a1b009 name=/runtime.v1.ImageService/ImageFsInfo
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398485895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398542393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.398816277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c3afea3-03b7-4104-a90b-d9bc8a1d3d43 name=/runtime.v1.RuntimeServ
ice/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.412710682Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7a1f9c4b-c23e-405f-b3d5-bcd661f52920 name=/runtime.v1.RuntimeService/ListPodSandbox
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.413005395Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&PodSandboxMetadata{Name:nginx,Uid:8de7ed01-2923-4e6d-8d79-73b590e77823,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1742215477493872258,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:44:37.180495347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&PodSandboxMetadata{Name:busybox,Uid:86b0f221-352a-43ab-8627-f3bd097570e7,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215424693591062,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:43:44.385756605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ec2562250167061bb61a8
9c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-xdmt9,Uid:015e05a6-3f15-4b89-be12-3508da6ca614,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215411294280128,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.084920619Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-66l7h,Uid:0dea4d95-0523-4d5b-84cb-9adc16a15c3b,Namespace:ingress-nginx,Attempt:0,},St
ate:SANDBOX_NOTREADY,CreatedAt:1742215347458948652,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 9aad88ca-42bc-461e-b0db-6a3f4c169332,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 9aad88ca-42bc-461e-b0db-6a3f4c169332,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.149942171Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-hq84q,Uid:76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,Crea
tedAt:1742215347407850820,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 48dd1ef2-5a6e-4081-92fb-51cd7e6a831a,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 48dd1ef2-5a6e-4081-92fb-51cd7e6a831a,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:27.094854704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c7f99af7-bb01-4504-ac70-77dc8ab04b3e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215346230711057,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-03-17T12:42:25.594747161Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:7fc15951-f543-49f2-aa75-cfec8ee9f60a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215344010360936,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-03-17T12:42:23.400117380Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-5pkbv,Uid:8713b029-97c0-4a95-a703-886b238a1cf1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215341710519445,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b02
9-97c0-4a95-a703-886b238a1cf1,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:21.397098870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-z7dq4,Uid:0a5b10dc-42b3-4a25-9f03-222b3324baf9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215340653695598,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:20.344782884Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&PodSandboxM
etadata{Name:kube-proxy-gfpml,Uid:c443023b-cd1a-4c68-95ea-21e945f88e15,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215340584555506,Labels:map[string]string{controller-revision-hash: 7bb84c4984,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:42:20.248837650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-012915,Uid:4d0e1e934e5a9e0fca80e2ef7acaa680,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330255923907,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.84:8443,kubernetes.io/config.hash: 4d0e1e934e5a9e0fca80e2ef7acaa680,kubernetes.io/config.seen: 2025-03-17T12:42:09.578681648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-012915,Uid:33eed52998b2ef38f7221edf88370515,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330241210865,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 33eed52998b2ef38f7221edf88370515,kubernetes.io/config
.seen: 2025-03-17T12:42:09.578682800Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&PodSandboxMetadata{Name:etcd-addons-012915,Uid:653939de3e18bd530b1f5fa403ec7f64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330240731421,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.84:2379,kubernetes.io/config.hash: 653939de3e18bd530b1f5fa403ec7f64,kubernetes.io/config.seen: 2025-03-17T12:42:09.578680165Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-012915,Uid:90b5efd14a8f4c6
9122921d67caed5e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215330239597446,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 90b5efd14a8f4c69122921d67caed5e4,kubernetes.io/config.seen: 2025-03-17T12:42:09.578676051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7a1f9c4b-c23e-405f-b3d5-bcd661f52920 name=/runtime.v1.RuntimeService/ListPodSandbox
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414389239Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414443780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.414720131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dd66a51459e1bef3f8408edd3f8f513b15938d306a8eca07b17aa8c7b9e28b71,PodSandboxId:c4de7d0c7eebbdc93b0251dfd2385920126017aca7d4e6cd7cdd7707fe087d23,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1742215481306876676,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8de7ed01-2923-4e6d-8d79-73b590e77823,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1e96cb1f9da48b54130c0a5b77b5950455c825228b80873ba1bed389be3129,PodSandboxId:44e0a2a5eb268d5c86d610b41c6a58c2054d5b9c14050d5e4efd0e82487f9e66,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1742215428544863639,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 86b0f221-352a-43ab-8627-f3bd097570e7,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c523da30019c3442fb38769fc3ff8bd361afb7328c1b6ae987f0f2ed8fca2e18,PodSandboxId:ec2562250167061bb61a89c84962bf285130b7d07feb2dec99b816f27bc60678,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1742215418117594801,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-xdmt9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 015e05a6-3f15-4b89-be12-3508da6ca614,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:efdb31bc8b6c1ecfe33294cbdf0d81864ce54c14641c1fcbd231bb9a88adc0f6,PodSandboxId:db1849458e66b95bb50356b7fb92fc3e6f8095b644aca37f4fbb67bed3ded80e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400541922291,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-66l7h,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0dea4d95-0523-4d5b-84cb-9adc16a15c3b,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133895556e47e4d79d84bfc1afab7f31237ad7cb5aa5d0d3137bae9a0ec19f48,PodSandboxId:80849abcc2545a296f9515d1f00e3c082666c079ce8027e9c10fe3d2f886236f,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1742215400444569254,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-hq84q,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 76f8d5e3-24ae-4b5e-a45c-f01b16d165fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40a360856a0a6bf25f96af4f507d594db92c89567721ac9a63b6fed61b702d2c,PodSandboxId:7b3ad027614e622bdd42ddc39f4ad1fb21323dea293f705975466fee1a56f5b5,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1742215373208789588,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pkbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8713b029-97c0-4a95-a703-886b238a1cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce580779eef53d01ece27b4bc2ffba2f4807f25438ba0cd881cef251d21b834,PodSandboxId:aeda29e4d26e13a5232196926d3e5d4baca39b959e6d35e5e5b4042a2d2df7fa,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1742215370723940907,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fc15951-f543-49f2-aa75-cfec8ee9f60a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b,PodSandboxId:7ca8740fcb29f6238a08bd5f550930bfe27bd53df6291f8147b14727dd088e19,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1742215346578805704,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f99af7-bb01-4504-ac70-77dc8ab04b3e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa,PodSandboxId:ea1483bc05de6bb1e3458948160d2dba4613e2f911875da97871ee48a8167bb6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1742215343913365647,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-z7dq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a5b10dc-42b3-4a25-9f03-222b3324baf9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513,PodSandboxId:d90f1092da620e69bbc467dbc17017d26ed7e351f1f460d8cfa2f5fa744c0332,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1742215341066312138,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gfpml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c443023b-cd1a-4c68-95ea-21e945f88e15,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d73a8dd6794a66be96fa55340cfe
f9c096d211461a6597900775ee7fb31061d,PodSandboxId:aecb92d28c7683fea0d3b0fe6004aa37718e726841435fa321421ef2c28eddf5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1742215330428864032,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90b5efd14a8f4c69122921d67caed5e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaa
e677a0e5bbbdf293d0,PodSandboxId:a86fb64e69f35fb457529972cca93e35e9b7ef8abb2ce0cd235207aa6f60cd21,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1742215330422210457,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33eed52998b2ef38f7221edf88370515,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c159463a3ceb104234df8916a5f45b4790ff
840cb429031de6dac13d8d11ecf8,PodSandboxId:27f65aa74ce37bb1939e121d7278958a4870485e53a810e0f75a54f492f2813e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1742215330395620755,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d0e1e934e5a9e0fca80e2ef7acaa680,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56
ced83465e0f,PodSandboxId:3adc1ca1797b27a09a788f6cb790d5a1e36fcb67045a86885668b83cd8b84cf0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1742215330374223398,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-012915,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 653939de3e18bd530b1f5fa403ec7f64,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=188b0c0f-72f6-4a09-98d1-00f3a1728652 name=/runtime.v1.RuntimeServ
ice/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.415828110Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,},},}" file="otel-collector/interceptors.go:62" id=25c28bfd-2be4-4a01-a8e5-4e5f7308e372 name=/runtime.v1.RuntimeService/ListPodSandbox
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.415909100Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=25c28bfd-2be4-4a01-a8e5-4e5f7308e372 name=/runtime.v1.RuntimeService/ListPodSandbox
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416578030Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Verbose:false,}" file="otel-collector/interceptors.go:62" id=114e7269-3987-4d7b-9e20-2618acd736be name=/runtime.v1.RuntimeService/PodSandboxStatus
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416668184Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:dabc9fe66eaacd72c8528f9ee57ef00e1b2004b66cce00aa5bcd41ded63ef506,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-8dhl4,Uid:f8a61344-cc0c-45b7-b157-e7662713cb83,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1742215617722476540,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-8dhl4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-03-17T12:46:57.110188553Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=114e7269-3987-4d7b-9e20-2618acd736be name=/runtime.v1.RuntimeService/PodSandboxStatus
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.416973924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: f8a61344-cc0c-45b7-b157-e7662713cb83,},},}" file="otel-collector/interceptors.go:62" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.417014110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
Mar 17 12:46:58 addons-012915 crio[662]: time="2025-03-17 12:46:58.417048335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bce42661-d49d-4a36-b7f0-157e51d4c38b name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
dd66a51459e1b docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591 2 minutes ago Running nginx 0 c4de7d0c7eebb nginx
ff1e96cb1f9da gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 44e0a2a5eb268 busybox
c523da30019c3 registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 ec25622501670 ingress-nginx-controller-56d7c84fd4-xdmt9
efdb31bc8b6c1 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited patch 0 db1849458e66b ingress-nginx-admission-patch-66l7h
133895556e47e registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 80849abcc2545 ingress-nginx-admission-create-hq84q
40a360856a0a6 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 7b3ad027614e6 amd-gpu-device-plugin-5pkbv
cce580779eef5 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 4 minutes ago Running minikube-ingress-dns 0 aeda29e4d26e1 kube-ingress-dns-minikube
8be226d7c0346 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 7ca8740fcb29f storage-provisioner
7a181ba896ebb c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 ea1483bc05de6 coredns-668d6bf9bc-z7dq4
5d9804125479f f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5 4 minutes ago Running kube-proxy 0 d90f1092da620 kube-proxy-gfpml
1d73a8dd6794a d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d 4 minutes ago Running kube-scheduler 0 aecb92d28c768 kube-scheduler-addons-012915
4a2bc102a5d64 b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389 4 minutes ago Running kube-controller-manager 0 a86fb64e69f35 kube-controller-manager-addons-012915
c159463a3ceb1 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef 4 minutes ago Running kube-apiserver 0 27f65aa74ce37 kube-apiserver-addons-012915
e60f82ce86901 a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc 4 minutes ago Running etcd 0 3adc1ca1797b2 etcd-addons-012915
==> coredns [7a181ba896ebb5402cbb2fde44e9c8995e6b16197f14fb89471cd7da7a8d0afa] <==
[INFO] 10.244.0.8:49913 - 3827 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000102878s
[INFO] 10.244.0.8:49913 - 6117 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000098609s
[INFO] 10.244.0.8:49913 - 28095 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000073695s
[INFO] 10.244.0.8:49913 - 55591 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000097s
[INFO] 10.244.0.8:49913 - 50510 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000065959s
[INFO] 10.244.0.8:49913 - 21083 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000088645s
[INFO] 10.244.0.8:49913 - 12222 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000077001s
[INFO] 10.244.0.8:48491 - 32076 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131036s
[INFO] 10.244.0.8:48491 - 31797 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00012079s
[INFO] 10.244.0.8:49200 - 45771 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121235s
[INFO] 10.244.0.8:49200 - 45479 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00007323s
[INFO] 10.244.0.8:54512 - 37816 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108277s
[INFO] 10.244.0.8:54512 - 37552 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130997s
[INFO] 10.244.0.8:38665 - 16125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000101873s
[INFO] 10.244.0.8:38665 - 15671 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075688s
[INFO] 10.244.0.23:50713 - 16053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000304035s
[INFO] 10.244.0.23:52398 - 18250 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000114522s
[INFO] 10.244.0.23:49369 - 29459 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091164s
[INFO] 10.244.0.23:46512 - 1811 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135319s
[INFO] 10.244.0.23:59638 - 27790 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000066899s
[INFO] 10.244.0.23:59548 - 31923 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082871s
[INFO] 10.244.0.23:36786 - 25199 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.002887193s
[INFO] 10.244.0.23:40623 - 14426 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004900596s
[INFO] 10.244.0.27:42367 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000241795s
[INFO] 10.244.0.27:34940 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110771s
==> describe nodes <==
Name: addons-012915
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-012915
kubernetes.io/os=linux
minikube.k8s.io/commit=5a6f3d20e78a9ae03fc65e3f2e727d0ae0107b3c
minikube.k8s.io/name=addons-012915
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_03_17T12_42_16_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-012915
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 17 Mar 2025 12:42:12 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-012915
AcquireTime: <unset>
RenewTime: Mon, 17 Mar 2025 12:46:51 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 17 Mar 2025 12:44:49 +0000 Mon, 17 Mar 2025 12:42:11 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 17 Mar 2025 12:44:49 +0000 Mon, 17 Mar 2025 12:42:11 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 17 Mar 2025 12:44:49 +0000 Mon, 17 Mar 2025 12:42:11 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 17 Mar 2025 12:44:49 +0000 Mon, 17 Mar 2025 12:42:16 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.84
Hostname: addons-012915
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: fa9b967ac9504025a9ee76c4341b2f70
System UUID: fa9b967a-c950-4025-a9ee-76c4341b2f70
Boot ID: 06cacc75-df46-43ee-b88f-6a36992d8647
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m14s
default hello-world-app-7d9564db4-8dhl4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m21s
ingress-nginx ingress-nginx-controller-56d7c84fd4-xdmt9 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m31s
kube-system amd-gpu-device-plugin-5pkbv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m37s
kube-system coredns-668d6bf9bc-z7dq4 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m38s
kube-system etcd-addons-012915 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m43s
kube-system kube-apiserver-addons-012915 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system kube-controller-manager-addons-012915 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system kube-proxy-gfpml 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m38s
kube-system kube-scheduler-addons-012915 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m43s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m36s kube-proxy
Normal Starting 4m43s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m43s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m43s kubelet Node addons-012915 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m43s kubelet Node addons-012915 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m43s kubelet Node addons-012915 status is now: NodeHasSufficientPID
Normal NodeReady 4m42s kubelet Node addons-012915 status is now: NodeReady
Normal RegisteredNode 4m39s node-controller Node addons-012915 event: Registered Node addons-012915 in Controller
==> dmesg <==
[ +4.074818] systemd-fstab-generator[860]: Ignoring "noauto" option for root device
[ +0.063473] kauditd_printk_skb: 158 callbacks suppressed
[ +5.969885] systemd-fstab-generator[1220]: Ignoring "noauto" option for root device
[ +0.075320] kauditd_printk_skb: 69 callbacks suppressed
[ +4.749538] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
[ +0.690834] kauditd_printk_skb: 46 callbacks suppressed
[ +5.028787] kauditd_printk_skb: 128 callbacks suppressed
[ +5.089939] kauditd_printk_skb: 159 callbacks suppressed
[ +12.866037] kauditd_printk_skb: 31 callbacks suppressed
[Mar17 12:43] kauditd_printk_skb: 2 callbacks suppressed
[ +13.771731] kauditd_printk_skb: 2 callbacks suppressed
[ +5.655980] kauditd_printk_skb: 44 callbacks suppressed
[ +5.486780] kauditd_printk_skb: 36 callbacks suppressed
[ +8.872962] kauditd_printk_skb: 41 callbacks suppressed
[ +5.030974] kauditd_printk_skb: 11 callbacks suppressed
[ +7.270633] kauditd_printk_skb: 9 callbacks suppressed
[Mar17 12:44] kauditd_printk_skb: 2 callbacks suppressed
[ +6.028036] kauditd_printk_skb: 15 callbacks suppressed
[ +5.114748] kauditd_printk_skb: 42 callbacks suppressed
[ +5.840044] kauditd_printk_skb: 63 callbacks suppressed
[ +7.986434] kauditd_printk_skb: 28 callbacks suppressed
[ +7.230236] kauditd_printk_skb: 20 callbacks suppressed
[ +5.533480] kauditd_printk_skb: 29 callbacks suppressed
[ +15.686562] kauditd_printk_skb: 9 callbacks suppressed
[Mar17 12:46] kauditd_printk_skb: 49 callbacks suppressed
==> etcd [e60f82ce8690170f47fa0a774c43f343703f1c09f94e277299d56ced83465e0f] <==
{"level":"warn","ts":"2025-03-17T12:43:23.671708Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.22929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T12:43:23.671751Z","caller":"traceutil/trace.go:171","msg":"trace[977862219] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1004; }","duration":"120.289304ms","start":"2025-03-17T12:43:23.551450Z","end":"2025-03-17T12:43:23.671739Z","steps":["trace[977862219] 'agreement among raft nodes before linearized reading' (duration: 120.231025ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T12:43:23.671877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.327794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2025-03-17T12:43:23.671914Z","caller":"traceutil/trace.go:171","msg":"trace[1519861164] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1004; }","duration":"166.378813ms","start":"2025-03-17T12:43:23.505524Z","end":"2025-03-17T12:43:23.671902Z","steps":["trace[1519861164] 'agreement among raft nodes before linearized reading' (duration: 166.303303ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:43:37.337560Z","caller":"traceutil/trace.go:171","msg":"trace[103932370] linearizableReadLoop","detail":"{readStateIndex:1113; appliedIndex:1112; }","duration":"172.5411ms","start":"2025-03-17T12:43:37.165003Z","end":"2025-03-17T12:43:37.337544Z","steps":["trace[103932370] 'read index received' (duration: 171.540007ms)","trace[103932370] 'applied index is now lower than readState.Index' (duration: 1.000664ms)"],"step_count":2}
{"level":"warn","ts":"2025-03-17T12:43:37.337738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.711672ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T12:43:37.337807Z","caller":"traceutil/trace.go:171","msg":"trace[2048962122] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1085; }","duration":"172.811552ms","start":"2025-03-17T12:43:37.164986Z","end":"2025-03-17T12:43:37.337798Z","steps":["trace[2048962122] 'agreement among raft nodes before linearized reading' (duration: 172.671788ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:43:37.338582Z","caller":"traceutil/trace.go:171","msg":"trace[749594463] transaction","detail":"{read_only:false; response_revision:1085; number_of_response:1; }","duration":"280.791133ms","start":"2025-03-17T12:43:37.057737Z","end":"2025-03-17T12:43:37.338528Z","steps":["trace[749594463] 'process raft request' (duration: 278.850485ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:44:04.231386Z","caller":"traceutil/trace.go:171","msg":"trace[728916456] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1213; }","duration":"304.101976ms","start":"2025-03-17T12:44:03.927193Z","end":"2025-03-17T12:44:04.231295Z","steps":["trace[728916456] 'process raft request' (duration: 303.979351ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T12:44:04.231561Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:03.927183Z","time spent":"304.26024ms","remote":"127.0.0.1:59508","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":70,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" mod_revision:815 > success:<request_delete_range:<key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" > > failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-cd9db85c-96xlv.182d97a1707d6318\" > >"}
{"level":"info","ts":"2025-03-17T12:44:04.236906Z","caller":"traceutil/trace.go:171","msg":"trace[1732004958] linearizableReadLoop","detail":"{readStateIndex:1248; appliedIndex:1247; }","duration":"171.164353ms","start":"2025-03-17T12:44:04.065731Z","end":"2025-03-17T12:44:04.236896Z","steps":["trace[1732004958] 'read index received' (duration: 165.939649ms)","trace[1732004958] 'applied index is now lower than readState.Index' (duration: 5.224337ms)"],"step_count":2}
{"level":"info","ts":"2025-03-17T12:44:04.237010Z","caller":"traceutil/trace.go:171","msg":"trace[1184029412] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"308.780397ms","start":"2025-03-17T12:44:03.928224Z","end":"2025-03-17T12:44:04.237004Z","steps":["trace[1184029412] 'process raft request' (duration: 308.595788ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T12:44:04.237079Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:03.928214Z","time spent":"308.825144ms","remote":"127.0.0.1:59130","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1207 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2025-03-17T12:44:04.237249Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.470371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2025-03-17T12:44:04.237299Z","caller":"traceutil/trace.go:171","msg":"trace[1900314030] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1214; }","duration":"171.583605ms","start":"2025-03-17T12:44:04.065707Z","end":"2025-03-17T12:44:04.237291Z","steps":["trace[1900314030] 'agreement among raft nodes before linearized reading' (duration: 171.478792ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:44:29.060435Z","caller":"traceutil/trace.go:171","msg":"trace[1879821724] linearizableReadLoop","detail":"{readStateIndex:1521; appliedIndex:1520; }","duration":"166.096471ms","start":"2025-03-17T12:44:28.894323Z","end":"2025-03-17T12:44:29.060419Z","steps":["trace[1879821724] 'read index received' (duration: 165.93886ms)","trace[1879821724] 'applied index is now lower than readState.Index' (duration: 157.163µs)"],"step_count":2}
{"level":"info","ts":"2025-03-17T12:44:29.060522Z","caller":"traceutil/trace.go:171","msg":"trace[756830991] transaction","detail":"{read_only:false; response_revision:1480; number_of_response:1; }","duration":"327.951814ms","start":"2025-03-17T12:44:28.732563Z","end":"2025-03-17T12:44:29.060515Z","steps":["trace[756830991] 'process raft request' (duration: 327.728886ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T12:44:29.060598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-03-17T12:44:28.732546Z","time spent":"327.993058ms","remote":"127.0.0.1:59242","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1454 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"warn","ts":"2025-03-17T12:44:29.060630Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.836541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-03-17T12:44:29.060674Z","caller":"traceutil/trace.go:171","msg":"trace[1856043071] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1480; }","duration":"154.89849ms","start":"2025-03-17T12:44:28.905762Z","end":"2025-03-17T12:44:29.060661Z","steps":["trace[1856043071] 'agreement among raft nodes before linearized reading' (duration: 154.837496ms)"],"step_count":1}
{"level":"warn","ts":"2025-03-17T12:44:29.060821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.803134ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"warn","ts":"2025-03-17T12:44:29.060827Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.501123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2025-03-17T12:44:29.060841Z","caller":"traceutil/trace.go:171","msg":"trace[1756805791] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1480; }","duration":"142.824169ms","start":"2025-03-17T12:44:28.918010Z","end":"2025-03-17T12:44:29.060834Z","steps":["trace[1756805791] 'agreement among raft nodes before linearized reading' (duration: 142.794282ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:44:29.060844Z","caller":"traceutil/trace.go:171","msg":"trace[1817373682] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1480; }","duration":"166.538356ms","start":"2025-03-17T12:44:28.894300Z","end":"2025-03-17T12:44:29.060839Z","steps":["trace[1817373682] 'agreement among raft nodes before linearized reading' (duration: 166.471494ms)"],"step_count":1}
{"level":"info","ts":"2025-03-17T12:44:32.380178Z","caller":"traceutil/trace.go:171","msg":"trace[763296153] transaction","detail":"{read_only:false; response_revision:1511; number_of_response:1; }","duration":"171.594633ms","start":"2025-03-17T12:44:32.208567Z","end":"2025-03-17T12:44:32.380161Z","steps":["trace[763296153] 'process raft request' (duration: 171.478484ms)"],"step_count":1}
==> kernel <==
12:46:58 up 5 min, 0 users, load average: 0.50, 1.19, 0.64
Linux addons-012915 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [c159463a3ceb104234df8916a5f45b4790ff840cb429031de6dac13d8d11ecf8] <==
I0317 12:43:03.973063 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0317 12:43:55.552768 1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:58516: use of closed network connection
E0317 12:43:55.722174 1 conn.go:339] Error on socket receive: read tcp 192.168.39.84:8443->192.168.39.1:58540: use of closed network connection
I0317 12:44:17.827179 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.19.214"}
I0317 12:44:29.826897 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0317 12:44:30.944384 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
E0317 12:44:33.236074 1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I0317 12:44:37.031022 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0317 12:44:37.221083 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.249.155"}
I0317 12:44:40.501198 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0317 12:44:59.703259 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 12:44:59.703379 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 12:44:59.727662 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 12:44:59.728025 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 12:44:59.780571 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 12:44:59.780706 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 12:44:59.838569 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 12:44:59.838619 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0317 12:44:59.846044 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0317 12:44:59.846084 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0317 12:45:00.839309 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0317 12:45:00.846498 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0317 12:45:00.941103 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I0317 12:45:04.943751 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0317 12:46:57.308248 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.196.201"}
==> kube-controller-manager [4a2bc102a5d64f2f40e006e80ed3dbb37f3996b3e6daaae677a0e5bbbdf293d0] <==
E0317 12:45:55.285844 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 12:45:59.641058 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 12:45:59.641984 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0317 12:45:59.642857 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 12:45:59.642894 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 12:46:15.609547 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 12:46:15.610531 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W0317 12:46:15.611642 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 12:46:15.611722 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 12:46:23.043373 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 12:46:23.044392 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
W0317 12:46:23.045180 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 12:46:23.045208 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 12:46:32.361507 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 12:46:32.362446 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0317 12:46:32.363155 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 12:46:32.363210 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0317 12:46:55.507190 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0317 12:46:55.507972 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0317 12:46:55.509475 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0317 12:46:55.509516 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0317 12:46:57.118701 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="29.665451ms"
I0317 12:46:57.130868 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.083665ms"
I0317 12:46:57.131225 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.035µs"
I0317 12:46:57.140393 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="68.481µs"
==> kube-proxy [5d9804125479f6a1365b6354df6e0220e2d0d6a2af965300bf6bee19a352c513] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0317 12:42:21.964854 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0317 12:42:21.998666 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.84"]
E0317 12:42:21.998745 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0317 12:42:22.087441 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0317 12:42:22.087511 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0317 12:42:22.087534 1 server_linux.go:170] "Using iptables Proxier"
I0317 12:42:22.090085 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0317 12:42:22.090377 1 server.go:497] "Version info" version="v1.32.2"
I0317 12:42:22.090400 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0317 12:42:22.094843 1 config.go:199] "Starting service config controller"
I0317 12:42:22.094878 1 shared_informer.go:313] Waiting for caches to sync for service config
I0317 12:42:22.094906 1 config.go:105] "Starting endpoint slice config controller"
I0317 12:42:22.094910 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0317 12:42:22.098376 1 config.go:329] "Starting node config controller"
I0317 12:42:22.098398 1 shared_informer.go:313] Waiting for caches to sync for node config
I0317 12:42:22.196848 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0317 12:42:22.196899 1 shared_informer.go:320] Caches are synced for service config
I0317 12:42:22.198464 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [1d73a8dd6794a66be96fa55340cfef9c096d211461a6597900775ee7fb31061d] <==
W0317 12:42:12.724943 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0317 12:42:12.724967 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 12:42:12.725000 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0317 12:42:12.725040 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 12:42:12.725056 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0317 12:42:12.725076 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 12:42:12.725255 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0317 12:42:12.725313 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0317 12:42:13.531058 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0317 12:42:13.531153 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.543635 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0317 12:42:13.543716 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.582301 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0317 12:42:13.582525 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.604241 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0317 12:42:13.604404 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.664692 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0317 12:42:13.664813 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.794966 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0317 12:42:13.795131 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.826935 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0317 12:42:13.826975 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0317 12:42:13.871365 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0317 12:42:13.871474 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0317 12:42:14.317805 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Mar 17 12:46:15 addons-012915 kubelet[1227]: Perhaps ip6tables or your kernel needs to be upgraded.
Mar 17 12:46:15 addons-012915 kubelet[1227]: > table="nat" chain="KUBE-KUBELET-CANARY"
Mar 17 12:46:15 addons-012915 kubelet[1227]: E0317 12:46:15.913970 1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215575913649049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:15 addons-012915 kubelet[1227]: E0317 12:46:15.914136 1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215575913649049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:25 addons-012915 kubelet[1227]: E0317 12:46:25.916775 1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215585916440678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:25 addons-012915 kubelet[1227]: E0317 12:46:25.917383 1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215585916440678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:35 addons-012915 kubelet[1227]: E0317 12:46:35.919643 1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215595919213400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:35 addons-012915 kubelet[1227]: E0317 12:46:35.919682 1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215595919213400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:41 addons-012915 kubelet[1227]: I0317 12:46:41.387955 1227 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5pkbv" secret="" err="secret \"gcp-auth\" not found"
Mar 17 12:46:45 addons-012915 kubelet[1227]: E0317 12:46:45.921646 1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215605921301024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:45 addons-012915 kubelet[1227]: E0317 12:46:45.921919 1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215605921301024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:55 addons-012915 kubelet[1227]: E0317 12:46:55.924634 1227 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215615924311190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:55 addons-012915 kubelet[1227]: E0317 12:46:55.924970 1227 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1742215615924311190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595374,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110483 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="5eeb38fe-0c9f-4504-9741-270bd6332865" containerName="task-pv-container"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110569 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="10d4e550-08b3-4311-afc0-fbaa5490aa26" containerName="volume-snapshot-controller"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110579 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="592324f8-e091-4a7a-a486-93d54a56c0f1" containerName="csi-attacher"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110585 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="node-driver-registrar"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110592 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="hostpath"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110597 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="liveness-probe"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110602 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-external-health-monitor-controller"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110608 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-provisioner"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110614 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="610b15f2-2636-4b8f-9363-d7c6eca55342" containerName="csi-resizer"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110620 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="6895b42e-cbbb-4c89-93d5-601d91db4e4e" containerName="csi-snapshotter"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.110626 1227 memory_manager.go:355] "RemoveStaleState removing state" podUID="a32b085e-c793-429b-9098-8df28c689c6d" containerName="volume-snapshot-controller"
Mar 17 12:46:57 addons-012915 kubelet[1227]: I0317 12:46:57.305410 1227 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvbh\" (UniqueName: \"kubernetes.io/projected/f8a61344-cc0c-45b7-b157-e7662713cb83-kube-api-access-kcvbh\") pod \"hello-world-app-7d9564db4-8dhl4\" (UID: \"f8a61344-cc0c-45b7-b157-e7662713cb83\") " pod="default/hello-world-app-7d9564db4-8dhl4"
==> storage-provisioner [8be226d7c03469fc7293b21c8dff4b351bcc17daa0d40e8e38e9703a232b644b] <==
I0317 12:42:26.773425 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0317 12:42:26.846465 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0317 12:42:26.846531 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0317 12:42:26.860041 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0317 12:42:26.860559 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7!
I0317 12:42:26.861310 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"727e4127-f891-4f4c-960c-bbe037bbdce9", APIVersion:"v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7 became leader
I0317 12:42:26.977150 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-012915_b267882d-687d-412c-ae15-c6731e4f36e7!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-012915 -n addons-012915
helpers_test.go:261: (dbg) Run: kubectl --context addons-012915 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h: exit status 1 (63.996372ms)
-- stdout --
Name: hello-world-app-7d9564db4-8dhl4
Namespace: default
Priority: 0
Service Account: default
Node: addons-012915/192.168.39.84
Start Time: Mon, 17 Mar 2025 12:46:57 +0000
Labels: app=hello-world-app
pod-template-hash=7d9564db4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-7d9564db4
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kcvbh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
kube-api-access-kcvbh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-8dhl4 to addons-012915
  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-hq84q" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-66l7h" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-012915 describe pod hello-world-app-7d9564db4-8dhl4 ingress-nginx-admission-create-hq84q ingress-nginx-admission-patch-66l7h: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-012915 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable ingress-dns --alsologtostderr -v=1: (1.041171122s)
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-012915 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-012915 addons disable ingress --alsologtostderr -v=1: (7.68948455s)
--- FAIL: TestAddons/parallel/Ingress (151.44s)