=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-823768 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-823768 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-823768 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [66c3042c-5ca2-4e67-bbd5-02c9c84af6ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [66c3042c-5ca2-4e67-bbd5-02c9c84af6ea] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004392724s
I0120 15:08:12.435477 2136749 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-823768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-823768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.062358054s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run: kubectl --context addons-823768 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-823768 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.158
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-823768 -n addons-823768
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-823768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 logs -n 25: (1.468299057s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | |
| | -p download-only-647713 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.32.0 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
| delete | -p download-only-647713 | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
| delete | -p download-only-193100 | download-only-193100 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
| delete | -p download-only-647713 | download-only-647713 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
| start | --download-only -p | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | |
| | binary-mirror-318745 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:45603 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-318745 | binary-mirror-318745 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:04 UTC |
| addons | disable dashboard -p | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | |
| | addons-823768 | | | | | |
| addons | enable dashboard -p | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | |
| | addons-823768 | | | | | |
| start | -p addons-823768 --wait=true | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:04 UTC | 20 Jan 25 15:07 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | -p addons-823768 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-823768 ssh cat | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | /opt/local-path-provisioner/pvc-f17509c2-6d0e-4c09-9067-5f1359f0d7a1_default_test-pvc/file1 | | | | | |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-823768 ip | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-823768 addons | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:07 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons disable | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-823768 addons | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:07 UTC | 20 Jan 25 15:08 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-823768 ssh curl -s | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:08 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ip | addons-823768 ip | addons-823768 | jenkins | v1.35.0 | 20 Jan 25 15:10 UTC | 20 Jan 25 15:10 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/01/20 15:04:57
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.23.4 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0120 15:04:57.624256 2137369 out.go:345] Setting OutFile to fd 1 ...
I0120 15:04:57.624398 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:04:57.624409 2137369 out.go:358] Setting ErrFile to fd 2...
I0120 15:04:57.624415 2137369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0120 15:04:57.624591 2137369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20109-2129584/.minikube/bin
I0120 15:04:57.625297 2137369 out.go:352] Setting JSON to false
I0120 15:04:57.626292 2137369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":24444,"bootTime":1737361054,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1074-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0120 15:04:57.626412 2137369 start.go:139] virtualization: kvm guest
I0120 15:04:57.628458 2137369 out.go:177] * [addons-823768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0120 15:04:57.630260 2137369 out.go:177] - MINIKUBE_LOCATION=20109
I0120 15:04:57.630256 2137369 notify.go:220] Checking for updates...
I0120 15:04:57.631582 2137369 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0120 15:04:57.633104 2137369 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20109-2129584/kubeconfig
I0120 15:04:57.634244 2137369 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20109-2129584/.minikube
I0120 15:04:57.635455 2137369 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0120 15:04:57.636773 2137369 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0120 15:04:57.638391 2137369 driver.go:394] Setting default libvirt URI to qemu:///system
I0120 15:04:57.672908 2137369 out.go:177] * Using the kvm2 driver based on user configuration
I0120 15:04:57.674463 2137369 start.go:297] selected driver: kvm2
I0120 15:04:57.674489 2137369 start.go:901] validating driver "kvm2" against <nil>
I0120 15:04:57.674515 2137369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0120 15:04:57.675362 2137369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 15:04:57.675488 2137369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20109-2129584/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0120 15:04:57.691694 2137369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0120 15:04:57.691745 2137369 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0120 15:04:57.691969 2137369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 15:04:57.692005 2137369 cni.go:84] Creating CNI manager for ""
I0120 15:04:57.692050 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0120 15:04:57.692059 2137369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0120 15:04:57.692109 2137369 start.go:340] cluster config:
{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 15:04:57.692209 2137369 iso.go:125] acquiring lock: {Name:mkfdd69d29de07488d13f32c54d682aa5b350b99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0120 15:04:57.695407 2137369 out.go:177] * Starting "addons-823768" primary control-plane node in "addons-823768" cluster
I0120 15:04:57.697150 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 15:04:57.697201 2137369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
I0120 15:04:57.697211 2137369 cache.go:56] Caching tarball of preloaded images
I0120 15:04:57.697294 2137369 preload.go:172] Found /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0120 15:04:57.697305 2137369 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
I0120 15:04:57.697657 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
I0120 15:04:57.697681 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json: {Name:mk4b31787ffc80a58bfaed119855eddc3ee78983 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:04:57.697836 2137369 start.go:360] acquireMachinesLock for addons-823768: {Name:mkb8bb9d716afe4381507ba751e49800d47b1664 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0120 15:04:57.697883 2137369 start.go:364] duration metric: took 33.177µs to acquireMachinesLock for "addons-823768"
I0120 15:04:57.697901 2137369 start.go:93] Provisioning new machine with config: &{Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I0120 15:04:57.697959 2137369 start.go:125] createHost starting for "" (driver="kvm2")
I0120 15:04:57.699982 2137369 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0120 15:04:57.700137 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:04:57.700187 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:04:57.715764 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
I0120 15:04:57.716302 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:04:57.717042 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:04:57.717071 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:04:57.717464 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:04:57.717672 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
I0120 15:04:57.717839 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:04:57.718072 2137369 start.go:159] libmachine.API.Create for "addons-823768" (driver="kvm2")
I0120 15:04:57.718100 2137369 client.go:168] LocalClient.Create starting
I0120 15:04:57.718140 2137369 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem
I0120 15:04:57.817798 2137369 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem
I0120 15:04:57.956327 2137369 main.go:141] libmachine: Running pre-create checks...
I0120 15:04:57.956353 2137369 main.go:141] libmachine: (addons-823768) Calling .PreCreateCheck
I0120 15:04:57.956945 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
I0120 15:04:57.957429 2137369 main.go:141] libmachine: Creating machine...
I0120 15:04:57.957442 2137369 main.go:141] libmachine: (addons-823768) Calling .Create
I0120 15:04:57.957600 2137369 main.go:141] libmachine: (addons-823768) creating KVM machine...
I0120 15:04:57.957614 2137369 main.go:141] libmachine: (addons-823768) creating network...
I0120 15:04:57.958969 2137369 main.go:141] libmachine: (addons-823768) DBG | found existing default KVM network
I0120 15:04:57.959704 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:57.959552 2137391 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000201200}
I0120 15:04:57.959765 2137369 main.go:141] libmachine: (addons-823768) DBG | created network xml:
I0120 15:04:57.959786 2137369 main.go:141] libmachine: (addons-823768) DBG | <network>
I0120 15:04:57.959800 2137369 main.go:141] libmachine: (addons-823768) DBG | <name>mk-addons-823768</name>
I0120 15:04:57.959807 2137369 main.go:141] libmachine: (addons-823768) DBG | <dns enable='no'/>
I0120 15:04:57.959814 2137369 main.go:141] libmachine: (addons-823768) DBG |
I0120 15:04:57.959822 2137369 main.go:141] libmachine: (addons-823768) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0120 15:04:57.959830 2137369 main.go:141] libmachine: (addons-823768) DBG | <dhcp>
I0120 15:04:57.959836 2137369 main.go:141] libmachine: (addons-823768) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0120 15:04:57.959845 2137369 main.go:141] libmachine: (addons-823768) DBG | </dhcp>
I0120 15:04:57.959850 2137369 main.go:141] libmachine: (addons-823768) DBG | </ip>
I0120 15:04:57.959857 2137369 main.go:141] libmachine: (addons-823768) DBG |
I0120 15:04:57.959868 2137369 main.go:141] libmachine: (addons-823768) DBG | </network>
I0120 15:04:57.959880 2137369 main.go:141] libmachine: (addons-823768) DBG |
I0120 15:04:57.965405 2137369 main.go:141] libmachine: (addons-823768) DBG | trying to create private KVM network mk-addons-823768 192.168.39.0/24...
I0120 15:04:58.037588 2137369 main.go:141] libmachine: (addons-823768) DBG | private KVM network mk-addons-823768 192.168.39.0/24 created
I0120 15:04:58.037645 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.037543 2137391 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20109-2129584/.minikube
I0120 15:04:58.037659 2137369 main.go:141] libmachine: (addons-823768) setting up store path in /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
I0120 15:04:58.037694 2137369 main.go:141] libmachine: (addons-823768) building disk image from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0120 15:04:58.037727 2137369 main.go:141] libmachine: (addons-823768) Downloading /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20109-2129584/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0120 15:04:58.314475 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.314330 2137391 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa...
I0120 15:04:58.360414 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360209 2137391 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk...
I0120 15:04:58.360466 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing magic tar header
I0120 15:04:58.360505 2137369 main.go:141] libmachine: (addons-823768) DBG | Writing SSH key tar header
I0120 15:04:58.360517 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:58.360380 2137391 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 ...
I0120 15:04:58.360543 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768
I0120 15:04:58.360562 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768 (perms=drwx------)
I0120 15:04:58.360574 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines
I0120 15:04:58.360589 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584/.minikube
I0120 15:04:58.360598 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20109-2129584
I0120 15:04:58.360610 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0120 15:04:58.360622 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home/jenkins
I0120 15:04:58.360631 2137369 main.go:141] libmachine: (addons-823768) DBG | checking permissions on dir: /home
I0120 15:04:58.360640 2137369 main.go:141] libmachine: (addons-823768) DBG | skipping /home - not owner
I0120 15:04:58.360718 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube/machines (perms=drwxr-xr-x)
I0120 15:04:58.360752 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584/.minikube (perms=drwxr-xr-x)
I0120 15:04:58.360768 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration/20109-2129584 (perms=drwxrwxr-x)
I0120 15:04:58.360782 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0120 15:04:58.360799 2137369 main.go:141] libmachine: (addons-823768) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0120 15:04:58.360810 2137369 main.go:141] libmachine: (addons-823768) creating domain...
I0120 15:04:58.362224 2137369 main.go:141] libmachine: (addons-823768) define libvirt domain using xml:
I0120 15:04:58.362248 2137369 main.go:141] libmachine: (addons-823768) <domain type='kvm'>
I0120 15:04:58.362256 2137369 main.go:141] libmachine: (addons-823768) <name>addons-823768</name>
I0120 15:04:58.362262 2137369 main.go:141] libmachine: (addons-823768) <memory unit='MiB'>4000</memory>
I0120 15:04:58.362267 2137369 main.go:141] libmachine: (addons-823768) <vcpu>2</vcpu>
I0120 15:04:58.362272 2137369 main.go:141] libmachine: (addons-823768) <features>
I0120 15:04:58.362277 2137369 main.go:141] libmachine: (addons-823768) <acpi/>
I0120 15:04:58.362282 2137369 main.go:141] libmachine: (addons-823768) <apic/>
I0120 15:04:58.362289 2137369 main.go:141] libmachine: (addons-823768) <pae/>
I0120 15:04:58.362296 2137369 main.go:141] libmachine: (addons-823768)
I0120 15:04:58.362301 2137369 main.go:141] libmachine: (addons-823768) </features>
I0120 15:04:58.362309 2137369 main.go:141] libmachine: (addons-823768) <cpu mode='host-passthrough'>
I0120 15:04:58.362325 2137369 main.go:141] libmachine: (addons-823768)
I0120 15:04:58.362336 2137369 main.go:141] libmachine: (addons-823768) </cpu>
I0120 15:04:58.362342 2137369 main.go:141] libmachine: (addons-823768) <os>
I0120 15:04:58.362350 2137369 main.go:141] libmachine: (addons-823768) <type>hvm</type>
I0120 15:04:58.362356 2137369 main.go:141] libmachine: (addons-823768) <boot dev='cdrom'/>
I0120 15:04:58.362363 2137369 main.go:141] libmachine: (addons-823768) <boot dev='hd'/>
I0120 15:04:58.362369 2137369 main.go:141] libmachine: (addons-823768) <bootmenu enable='no'/>
I0120 15:04:58.362377 2137369 main.go:141] libmachine: (addons-823768) </os>
I0120 15:04:58.362382 2137369 main.go:141] libmachine: (addons-823768) <devices>
I0120 15:04:58.362388 2137369 main.go:141] libmachine: (addons-823768) <disk type='file' device='cdrom'>
I0120 15:04:58.362397 2137369 main.go:141] libmachine: (addons-823768) <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/boot2docker.iso'/>
I0120 15:04:58.362405 2137369 main.go:141] libmachine: (addons-823768) <target dev='hdc' bus='scsi'/>
I0120 15:04:58.362411 2137369 main.go:141] libmachine: (addons-823768) <readonly/>
I0120 15:04:58.362418 2137369 main.go:141] libmachine: (addons-823768) </disk>
I0120 15:04:58.362432 2137369 main.go:141] libmachine: (addons-823768) <disk type='file' device='disk'>
I0120 15:04:58.362442 2137369 main.go:141] libmachine: (addons-823768) <driver name='qemu' type='raw' cache='default' io='threads' />
I0120 15:04:58.362450 2137369 main.go:141] libmachine: (addons-823768) <source file='/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/addons-823768.rawdisk'/>
I0120 15:04:58.362458 2137369 main.go:141] libmachine: (addons-823768) <target dev='hda' bus='virtio'/>
I0120 15:04:58.362463 2137369 main.go:141] libmachine: (addons-823768) </disk>
I0120 15:04:58.362471 2137369 main.go:141] libmachine: (addons-823768) <interface type='network'>
I0120 15:04:58.362491 2137369 main.go:141] libmachine: (addons-823768) <source network='mk-addons-823768'/>
I0120 15:04:58.362504 2137369 main.go:141] libmachine: (addons-823768) <model type='virtio'/>
I0120 15:04:58.362509 2137369 main.go:141] libmachine: (addons-823768) </interface>
I0120 15:04:58.362514 2137369 main.go:141] libmachine: (addons-823768) <interface type='network'>
I0120 15:04:58.362530 2137369 main.go:141] libmachine: (addons-823768) <source network='default'/>
I0120 15:04:58.362537 2137369 main.go:141] libmachine: (addons-823768) <model type='virtio'/>
I0120 15:04:58.362542 2137369 main.go:141] libmachine: (addons-823768) </interface>
I0120 15:04:58.362547 2137369 main.go:141] libmachine: (addons-823768) <serial type='pty'>
I0120 15:04:58.362552 2137369 main.go:141] libmachine: (addons-823768) <target port='0'/>
I0120 15:04:58.362558 2137369 main.go:141] libmachine: (addons-823768) </serial>
I0120 15:04:58.362565 2137369 main.go:141] libmachine: (addons-823768) <console type='pty'>
I0120 15:04:58.362579 2137369 main.go:141] libmachine: (addons-823768) <target type='serial' port='0'/>
I0120 15:04:58.362587 2137369 main.go:141] libmachine: (addons-823768) </console>
I0120 15:04:58.362594 2137369 main.go:141] libmachine: (addons-823768) <rng model='virtio'>
I0120 15:04:58.362642 2137369 main.go:141] libmachine: (addons-823768) <backend model='random'>/dev/random</backend>
I0120 15:04:58.362668 2137369 main.go:141] libmachine: (addons-823768) </rng>
I0120 15:04:58.362682 2137369 main.go:141] libmachine: (addons-823768)
I0120 15:04:58.362694 2137369 main.go:141] libmachine: (addons-823768)
I0120 15:04:58.362704 2137369 main.go:141] libmachine: (addons-823768) </devices>
I0120 15:04:58.362716 2137369 main.go:141] libmachine: (addons-823768) </domain>
I0120 15:04:58.362728 2137369 main.go:141] libmachine: (addons-823768)
I0120 15:04:58.367308 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:fe:73:ee in network default
I0120 15:04:58.367817 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:04:58.367831 2137369 main.go:141] libmachine: (addons-823768) starting domain...
I0120 15:04:58.367843 2137369 main.go:141] libmachine: (addons-823768) ensuring networks are active...
I0120 15:04:58.368477 2137369 main.go:141] libmachine: (addons-823768) Ensuring network default is active
I0120 15:04:58.368765 2137369 main.go:141] libmachine: (addons-823768) Ensuring network mk-addons-823768 is active
I0120 15:04:58.369246 2137369 main.go:141] libmachine: (addons-823768) getting domain XML...
I0120 15:04:58.369915 2137369 main.go:141] libmachine: (addons-823768) creating domain...
I0120 15:04:59.601024 2137369 main.go:141] libmachine: (addons-823768) waiting for IP...
I0120 15:04:59.602003 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:04:59.602406 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:04:59.602487 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.602427 2137391 retry.go:31] will retry after 258.668513ms: waiting for domain to come up
I0120 15:04:59.863113 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:04:59.863860 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:04:59.863887 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:04:59.863820 2137391 retry.go:31] will retry after 284.943032ms: waiting for domain to come up
I0120 15:05:00.150387 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:00.150799 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:00.150864 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.150788 2137391 retry.go:31] will retry after 487.888334ms: waiting for domain to come up
I0120 15:05:00.640607 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:00.641049 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:00.641074 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:00.640997 2137391 retry.go:31] will retry after 506.402264ms: waiting for domain to come up
I0120 15:05:01.148692 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:01.149072 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:01.149103 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.149042 2137391 retry.go:31] will retry after 610.710776ms: waiting for domain to come up
I0120 15:05:01.761084 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:01.761615 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:01.761660 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:01.761555 2137391 retry.go:31] will retry after 869.953856ms: waiting for domain to come up
I0120 15:05:02.632849 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:02.633348 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:02.633383 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:02.633307 2137391 retry.go:31] will retry after 878.477724ms: waiting for domain to come up
I0120 15:05:03.512981 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:03.513483 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:03.513516 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:03.513425 2137391 retry.go:31] will retry after 1.196488457s: waiting for domain to come up
I0120 15:05:04.711923 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:04.712468 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:04.712555 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:04.712444 2137391 retry.go:31] will retry after 1.238217465s: waiting for domain to come up
I0120 15:05:05.952338 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:05.952718 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:05.952767 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:05.952682 2137391 retry.go:31] will retry after 1.963992606s: waiting for domain to come up
I0120 15:05:07.919115 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:07.919614 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:07.919688 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:07.919591 2137391 retry.go:31] will retry after 2.598377206s: waiting for domain to come up
I0120 15:05:10.519561 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:10.519995 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:10.520062 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:10.519979 2137391 retry.go:31] will retry after 2.387749397s: waiting for domain to come up
I0120 15:05:12.909148 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:12.909462 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:12.909482 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:12.909426 2137391 retry.go:31] will retry after 3.566319877s: waiting for domain to come up
I0120 15:05:16.480251 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:16.480589 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find current IP address of domain addons-823768 in network mk-addons-823768
I0120 15:05:16.480632 2137369 main.go:141] libmachine: (addons-823768) DBG | I0120 15:05:16.480539 2137391 retry.go:31] will retry after 5.139483327s: waiting for domain to come up
I0120 15:05:21.624584 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.625210 2137369 main.go:141] libmachine: (addons-823768) found domain IP: 192.168.39.158
I0120 15:05:21.625248 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has current primary IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.625255 2137369 main.go:141] libmachine: (addons-823768) reserving static IP address...
I0120 15:05:21.625737 2137369 main.go:141] libmachine: (addons-823768) DBG | unable to find host DHCP lease matching {name: "addons-823768", mac: "52:54:00:25:8d:22", ip: "192.168.39.158"} in network mk-addons-823768
I0120 15:05:21.704346 2137369 main.go:141] libmachine: (addons-823768) DBG | Getting to WaitForSSH function...
I0120 15:05:21.704393 2137369 main.go:141] libmachine: (addons-823768) reserved static IP address 192.168.39.158 for domain addons-823768
I0120 15:05:21.704447 2137369 main.go:141] libmachine: (addons-823768) waiting for SSH...
I0120 15:05:21.707052 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.707627 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:minikube Clientid:01:52:54:00:25:8d:22}
I0120 15:05:21.707662 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.707819 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH client type: external
I0120 15:05:21.707849 2137369 main.go:141] libmachine: (addons-823768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa (-rw-------)
I0120 15:05:21.707888 2137369 main.go:141] libmachine: (addons-823768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.158 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa -p 22] /usr/bin/ssh <nil>}
I0120 15:05:21.707907 2137369 main.go:141] libmachine: (addons-823768) DBG | About to run SSH command:
I0120 15:05:21.707924 2137369 main.go:141] libmachine: (addons-823768) DBG | exit 0
I0120 15:05:21.831180 2137369 main.go:141] libmachine: (addons-823768) DBG | SSH cmd err, output: <nil>:
I0120 15:05:21.831428 2137369 main.go:141] libmachine: (addons-823768) KVM machine creation complete
I0120 15:05:21.831824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
I0120 15:05:21.832433 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:21.832624 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:21.832787 2137369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0120 15:05:21.832803 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:21.834150 2137369 main.go:141] libmachine: Detecting operating system of created instance...
I0120 15:05:21.834163 2137369 main.go:141] libmachine: Waiting for SSH to be available...
I0120 15:05:21.834169 2137369 main.go:141] libmachine: Getting to WaitForSSH function...
I0120 15:05:21.834174 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:21.836638 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.836979 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:21.837011 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.837216 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:21.837461 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:21.837656 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:21.837855 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:21.838060 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:21.838317 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:21.838332 2137369 main.go:141] libmachine: About to run SSH command:
exit 0
I0120 15:05:21.938133 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 15:05:21.938165 2137369 main.go:141] libmachine: Detecting the provisioner...
I0120 15:05:21.938176 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:21.941079 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.941442 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:21.941472 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:21.941599 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:21.941824 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:21.942016 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:21.942197 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:21.942359 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:21.942538 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:21.942550 2137369 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0120 15:05:22.044310 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0120 15:05:22.044405 2137369 main.go:141] libmachine: found compatible host: buildroot
I0120 15:05:22.044421 2137369 main.go:141] libmachine: Provisioning with buildroot...
I0120 15:05:22.044435 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
I0120 15:05:22.044699 2137369 buildroot.go:166] provisioning hostname "addons-823768"
I0120 15:05:22.044733 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
I0120 15:05:22.044923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.047943 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.048353 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.048374 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.048517 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.048723 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.048877 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.048970 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.049121 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:22.049312 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:22.049324 2137369 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-823768 && echo "addons-823768" | sudo tee /etc/hostname
I0120 15:05:22.166123 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-823768
I0120 15:05:22.166193 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.169246 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.169621 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.169659 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.169836 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.170038 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.170186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.170305 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.170495 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:22.170736 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:22.170762 2137369 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-823768' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-823768/g' /etc/hosts;
else
echo '127.0.1.1 addons-823768' | sudo tee -a /etc/hosts;
fi
fi
I0120 15:05:22.280555 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0120 15:05:22.280595 2137369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20109-2129584/.minikube CaCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20109-2129584/.minikube}
I0120 15:05:22.280622 2137369 buildroot.go:174] setting up certificates
I0120 15:05:22.280638 2137369 provision.go:84] configureAuth start
I0120 15:05:22.280654 2137369 main.go:141] libmachine: (addons-823768) Calling .GetMachineName
I0120 15:05:22.281026 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
I0120 15:05:22.283951 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.284335 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.284358 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.284533 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.286813 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.287192 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.287215 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.287344 2137369 provision.go:143] copyHostCerts
I0120 15:05:22.287426 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.pem (1082 bytes)
I0120 15:05:22.287580 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/cert.pem (1123 bytes)
I0120 15:05:22.287682 2137369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20109-2129584/.minikube/key.pem (1679 bytes)
I0120 15:05:22.287769 2137369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem org=jenkins.addons-823768 san=[127.0.0.1 192.168.39.158 addons-823768 localhost minikube]
I0120 15:05:22.401850 2137369 provision.go:177] copyRemoteCerts
I0120 15:05:22.401946 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0120 15:05:22.401974 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.405186 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.405681 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.405710 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.405977 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.406213 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.406368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.406524 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:22.489134 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0120 15:05:22.514579 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0120 15:05:22.539697 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0120 15:05:22.564884 2137369 provision.go:87] duration metric: took 284.22466ms to configureAuth
I0120 15:05:22.564927 2137369 buildroot.go:189] setting minikube options for container-runtime
I0120 15:05:22.565156 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:05:22.565249 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.568228 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.568661 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.568706 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.568801 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.569007 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.569179 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.569341 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.569501 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:22.569699 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:22.569716 2137369 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0120 15:05:22.802503 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
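The two lines above show the provisioner writing /etc/sysconfig/crio.minikube over SSH and restarting CRI-O with the insecure-registry flag for the service CIDR. A quick way to confirm the drop-in landed, assuming the profile is still running and plain `minikube ssh` works against it (an illustrative sketch, not part of the test itself):
# check the generated sysconfig file and that CRI-O came back up after the restart
minikube -p addons-823768 ssh 'cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio'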
I0120 15:05:22.802531 2137369 main.go:141] libmachine: Checking connection to Docker...
I0120 15:05:22.802540 2137369 main.go:141] libmachine: (addons-823768) Calling .GetURL
I0120 15:05:22.803962 2137369 main.go:141] libmachine: (addons-823768) DBG | using libvirt version 6000000
I0120 15:05:22.806234 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.806594 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.806655 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.806814 2137369 main.go:141] libmachine: Docker is up and running!
I0120 15:05:22.806829 2137369 main.go:141] libmachine: Reticulating splines...
I0120 15:05:22.806837 2137369 client.go:171] duration metric: took 25.088726295s to LocalClient.Create
I0120 15:05:22.806864 2137369 start.go:167] duration metric: took 25.088792622s to libmachine.API.Create "addons-823768"
I0120 15:05:22.806874 2137369 start.go:293] postStartSetup for "addons-823768" (driver="kvm2")
I0120 15:05:22.806886 2137369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0120 15:05:22.806906 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:22.807197 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0120 15:05:22.807222 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.809507 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.809856 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.809877 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.810074 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.810283 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.810491 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.810686 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:22.893410 2137369 ssh_runner.go:195] Run: cat /etc/os-release
I0120 15:05:22.897799 2137369 info.go:137] Remote host: Buildroot 2023.02.9
I0120 15:05:22.897835 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/addons for local assets ...
I0120 15:05:22.897908 2137369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20109-2129584/.minikube/files for local assets ...
I0120 15:05:22.897935 2137369 start.go:296] duration metric: took 91.053195ms for postStartSetup
I0120 15:05:22.897999 2137369 main.go:141] libmachine: (addons-823768) Calling .GetConfigRaw
I0120 15:05:22.898651 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
I0120 15:05:22.902713 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.903149 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.903182 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.903416 2137369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/config.json ...
I0120 15:05:22.903615 2137369 start.go:128] duration metric: took 25.205644985s to createHost
I0120 15:05:22.903638 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:22.905563 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.905853 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:22.905900 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:22.905949 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:22.906149 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.906296 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:22.906429 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:22.906664 2137369 main.go:141] libmachine: Using SSH client type: native
I0120 15:05:22.906868 2137369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8641c0] 0x866ea0 <nil> [] 0s} 192.168.39.158 22 <nil> <nil>}
I0120 15:05:22.906880 2137369 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0120 15:05:23.008106 2137369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737385522.980460970
I0120 15:05:23.008135 2137369 fix.go:216] guest clock: 1737385522.980460970
I0120 15:05:23.008143 2137369 fix.go:229] Guest: 2025-01-20 15:05:22.98046097 +0000 UTC Remote: 2025-01-20 15:05:22.903626964 +0000 UTC m=+25.320898969 (delta=76.834006ms)
I0120 15:05:23.008215 2137369 fix.go:200] guest clock delta is within tolerance: 76.834006ms
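The guest-clock check above is just the difference between the guest wall clock and the host-side timestamp captured for the same instant; it can be reproduced by hand (assumes bc is available; purely illustrative):
# guest seconds minus host seconds for 15:05:22
echo '22.980460970 - 22.903626964' | bc -l   # 0.076834006 s, i.e. the 76.834006ms delta reported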
I0120 15:05:23.008230 2137369 start.go:83] releasing machines lock for "addons-823768", held for 25.310337319s
I0120 15:05:23.008265 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:23.008613 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
I0120 15:05:23.011490 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.011849 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:23.011878 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.012093 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:23.012681 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:23.012869 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:23.012984 2137369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0120 15:05:23.013034 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:23.013163 2137369 ssh_runner.go:195] Run: cat /version.json
I0120 15:05:23.013186 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:23.015959 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.016170 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.016408 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:23.016434 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.016609 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:23.016700 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:23.016732 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:23.016845 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:23.016912 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:23.016984 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:23.017055 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:23.017119 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:23.017164 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:23.017332 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:23.091913 2137369 ssh_runner.go:195] Run: systemctl --version
I0120 15:05:23.122269 2137369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0120 15:05:23.875612 2137369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0120 15:05:23.882266 2137369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0120 15:05:23.882347 2137369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0120 15:05:23.900478 2137369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0120 15:05:23.900506 2137369 start.go:495] detecting cgroup driver to use...
I0120 15:05:23.900575 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0120 15:05:23.918752 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0120 15:05:23.934434 2137369 docker.go:217] disabling cri-docker service (if available) ...
I0120 15:05:23.934503 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0120 15:05:23.948970 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0120 15:05:23.963860 2137369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0120 15:05:24.085254 2137369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0120 15:05:24.229859 2137369 docker.go:233] disabling docker service ...
I0120 15:05:24.229956 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0120 15:05:24.245938 2137369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0120 15:05:24.260809 2137369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0120 15:05:24.396969 2137369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0120 15:05:24.518925 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0120 15:05:24.534100 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0120 15:05:24.553792 2137369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0120 15:05:24.553860 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.565579 2137369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0120 15:05:24.565658 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.577482 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.589471 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.601410 2137369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0120 15:05:24.613467 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.624780 2137369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0120 15:05:24.643556 2137369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
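Taken together, the sed invocations above amount to a CRI-O drop-in along the following lines (a sketch only, written to a hypothetical 99-example.conf; the real 02-crio.conf carries more settings than shown here):
# reproduce the net effect of the edits as a standalone drop-in
sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF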
I0120 15:05:24.655973 2137369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0120 15:05:24.666889 2137369 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0120 15:05:24.666993 2137369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0120 15:05:24.681872 2137369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
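The failed sysctl probe above is expected before br_netfilter is loaded; the modprobe and the ip_forward write that follow are the standard kubelet/kubeadm networking prerequisites. They can be verified by hand inside the guest (illustrative sketch, assumes root):
# load the module, then confirm both kernel knobs are on
sudo modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
lsmod | grep br_netfilter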
I0120 15:05:24.692833 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 15:05:24.816424 2137369 ssh_runner.go:195] Run: sudo systemctl restart crio
I0120 15:05:24.916890 2137369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0120 15:05:24.917033 2137369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0120 15:05:24.922124 2137369 start.go:563] Will wait 60s for crictl version
I0120 15:05:24.922223 2137369 ssh_runner.go:195] Run: which crictl
I0120 15:05:24.926492 2137369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0120 15:05:24.966056 2137369 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0120 15:05:24.966165 2137369 ssh_runner.go:195] Run: crio --version
I0120 15:05:25.000470 2137369 ssh_runner.go:195] Run: crio --version
I0120 15:05:25.032126 2137369 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
I0120 15:05:25.033657 2137369 main.go:141] libmachine: (addons-823768) Calling .GetIP
I0120 15:05:25.036578 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:25.037003 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:25.037039 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:25.037400 2137369 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0120 15:05:25.042011 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
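The one-liner above is an idempotent way of pinning host.minikube.internal in /etc/hosts: strip any existing entry, append a fresh one, and copy the result back. A hedged way to confirm the entry afterwards (getent may or may not be present in the Buildroot guest):
grep 'host.minikube.internal' /etc/hosts
getent hosts host.minikube.internal   # resolves via /etc/hosts when the files NSS source is configured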
I0120 15:05:25.055574 2137369 kubeadm.go:883] updating cluster {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0120 15:05:25.055706 2137369 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I0120 15:05:25.055752 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:05:25.092416 2137369 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
I0120 15:05:25.092490 2137369 ssh_runner.go:195] Run: which lz4
I0120 15:05:25.096985 2137369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0120 15:05:25.101643 2137369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0120 15:05:25.101687 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
I0120 15:05:26.559521 2137369 crio.go:462] duration metric: took 1.462632814s to copy over tarball
I0120 15:05:26.559603 2137369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0120 15:05:28.881265 2137369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.321627399s)
I0120 15:05:28.881296 2137369 crio.go:469] duration metric: took 2.321738568s to extract the tarball
I0120 15:05:28.881308 2137369 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0120 15:05:28.923957 2137369 ssh_runner.go:195] Run: sudo crictl images --output json
I0120 15:05:28.966345 2137369 crio.go:514] all images are preloaded for cri-o runtime.
I0120 15:05:28.966375 2137369 cache_images.go:84] Images are preloaded, skipping loading
I0120 15:05:28.966384 2137369 kubeadm.go:934] updating node { 192.168.39.158 8443 v1.32.0 crio true true} ...
I0120 15:05:28.966505 2137369 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-823768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.158
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0120 15:05:28.966576 2137369 ssh_runner.go:195] Run: crio config
I0120 15:05:29.027026 2137369 cni.go:84] Creating CNI manager for ""
I0120 15:05:29.027056 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0120 15:05:29.027070 2137369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0120 15:05:29.027106 2137369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.158 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-823768 NodeName:addons-823768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.158"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.158 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0120 15:05:29.027278 2137369 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.158
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-823768"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.158"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.158"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
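The generated kubeadm config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When debugging a config like this by hand, it can be checked against the v1beta4 schema without touching node state (illustrative sketch; `kubeadm config validate` is available in recent kubeadm releases):
# validate the generated file inside the guest using the same kubeadm binary the test installs
sudo /var/lib/minikube/binaries/v1.32.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new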
I0120 15:05:29.027360 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I0120 15:05:29.038001 2137369 binaries.go:44] Found k8s binaries, skipping transfer
I0120 15:05:29.038070 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0120 15:05:29.048357 2137369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0120 15:05:29.066394 2137369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0120 15:05:29.083817 2137369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
I0120 15:05:29.101973 2137369 ssh_runner.go:195] Run: grep 192.168.39.158 control-plane.minikube.internal$ /etc/hosts
I0120 15:05:29.106193 2137369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.158 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0120 15:05:29.119610 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 15:05:29.229096 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 15:05:29.247908 2137369 certs.go:68] Setting up /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768 for IP: 192.168.39.158
I0120 15:05:29.247938 2137369 certs.go:194] generating shared ca certs ...
I0120 15:05:29.247962 2137369 certs.go:226] acquiring lock for ca certs: {Name:mk84252bd5600698fafde6d96c5306f1543c8a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.248133 2137369 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key
I0120 15:05:29.375528 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt ...
I0120 15:05:29.375570 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt: {Name:mk95237ca492d6a8873dc0ee527d241251260641 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.375788 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key ...
I0120 15:05:29.375806 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key: {Name:mk2a2005e42e379cc392095c3323349ceaba77a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.375924 2137369 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key
I0120 15:05:29.506135 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt ...
I0120 15:05:29.506170 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt: {Name:mkbf86178b27c05eca2541aa5684eb4efb701b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.506350 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key ...
I0120 15:05:29.506366 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key: {Name:mk482675847c9e92b5693c4a036fdcbdd07762af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.506469 2137369 certs.go:256] generating profile certs ...
I0120 15:05:29.506569 2137369 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key
I0120 15:05:29.506591 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt with IP's: []
I0120 15:05:29.632374 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt ...
I0120 15:05:29.632424 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.crt: {Name:mk3520768cf7dae31823de6f71890b04241d6376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.632615 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key ...
I0120 15:05:29.632631 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/client.key: {Name:mk76119af4a5a356e887e3134370f7dc46e58fde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.632737 2137369 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5
I0120 15:05:29.632764 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.158]
I0120 15:05:29.770493 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 ...
I0120 15:05:29.770531 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5: {Name:mk9454cdba7b3006624e137f0bfa7b68d0d57860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.770726 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 ...
I0120 15:05:29.770744 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5: {Name:mkb77c90195774352d1df405073394964b639a07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.770848 2137369 certs.go:381] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt
I0120 15:05:29.770966 2137369 certs.go:385] copying /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key.99cba5f5 -> /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key
I0120 15:05:29.771058 2137369 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key
I0120 15:05:29.771088 2137369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt with IP's: []
I0120 15:05:29.886204 2137369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt ...
I0120 15:05:29.886243 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt: {Name:mka25cfa7c2ede2de31741302e198a7540947810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.886431 2137369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key ...
I0120 15:05:29.886449 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key: {Name:mk8d45c04d3d1bcd97c6423c1861ad369ae8c86b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:29.886681 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca-key.pem (1675 bytes)
I0120 15:05:29.886732 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/ca.pem (1082 bytes)
I0120 15:05:29.886764 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/cert.pem (1123 bytes)
I0120 15:05:29.886800 2137369 certs.go:484] found cert: /home/jenkins/minikube-integration/20109-2129584/.minikube/certs/key.pem (1679 bytes)
I0120 15:05:29.887529 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0120 15:05:29.920958 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0120 15:05:29.961136 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0120 15:05:29.991787 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0120 15:05:30.017520 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0120 15:05:30.042540 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0120 15:05:30.067826 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0120 15:05:30.093111 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/profiles/addons-823768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0120 15:05:30.120801 2137369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20109-2129584/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0120 15:05:30.145867 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0120 15:05:30.163291 2137369 ssh_runner.go:195] Run: openssl version
I0120 15:05:30.169332 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0120 15:05:30.180684 2137369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0120 15:05:30.185690 2137369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 20 15:05 /usr/share/ca-certificates/minikubeCA.pem
I0120 15:05:30.185771 2137369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0120 15:05:30.192059 2137369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
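The b5213941.0 link created above follows the standard OpenSSL subject-hash convention: the link name is the certificate's subject hash plus a .0 suffix, which is exactly what the preceding `openssl x509 -hash` call prints. To see the mapping explicitly (sketch, run inside the guest):
openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used as the link name
ls -l /etc/ssl/certs/ | grep -i minikube                                  # shows <hash>.0 pointing at minikubeCA.pem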
I0120 15:05:30.203678 2137369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0120 15:05:30.208221 2137369 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0120 15:05:30.208309 2137369 kubeadm.go:392] StartCluster: {Name:addons-823768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-823768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0120 15:05:30.208405 2137369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0120 15:05:30.208469 2137369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0120 15:05:30.247022 2137369 cri.go:89] found id: ""
I0120 15:05:30.247118 2137369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0120 15:05:30.257748 2137369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0120 15:05:30.268149 2137369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0120 15:05:30.279855 2137369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0120 15:05:30.279881 2137369 kubeadm.go:157] found existing configuration files:
I0120 15:05:30.279930 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0120 15:05:30.290146 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0120 15:05:30.290227 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0120 15:05:30.300670 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0120 15:05:30.310440 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0120 15:05:30.310509 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0120 15:05:30.320924 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0120 15:05:30.330490 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0120 15:05:30.330568 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0120 15:05:30.340525 2137369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0120 15:05:30.350412 2137369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0120 15:05:30.350475 2137369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0120 15:05:30.360454 2137369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0120 15:05:30.416929 2137369 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I0120 15:05:30.417044 2137369 kubeadm.go:310] [preflight] Running pre-flight checks
I0120 15:05:30.518614 2137369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0120 15:05:30.518741 2137369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0120 15:05:30.518916 2137369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0120 15:05:30.540333 2137369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0120 15:05:30.586140 2137369 out.go:235] - Generating certificates and keys ...
I0120 15:05:30.586320 2137369 kubeadm.go:310] [certs] Using existing ca certificate authority
I0120 15:05:30.586423 2137369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0120 15:05:30.724586 2137369 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0120 15:05:30.825694 2137369 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0120 15:05:30.938774 2137369 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0120 15:05:31.384157 2137369 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0120 15:05:31.450833 2137369 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0120 15:05:31.451192 2137369 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
I0120 15:05:31.753678 2137369 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0120 15:05:31.753966 2137369 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-823768 localhost] and IPs [192.168.39.158 127.0.0.1 ::1]
I0120 15:05:31.832258 2137369 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0120 15:05:32.352824 2137369 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0120 15:05:32.512677 2137369 kubeadm.go:310] [certs] Generating "sa" key and public key
I0120 15:05:32.512862 2137369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0120 15:05:32.737640 2137369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0120 15:05:32.934895 2137369 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0120 15:05:33.168194 2137369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0120 15:05:33.369097 2137369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0120 15:05:33.571513 2137369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0120 15:05:33.572224 2137369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0120 15:05:33.577165 2137369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0120 15:05:33.579006 2137369 out.go:235] - Booting up control plane ...
I0120 15:05:33.579145 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0120 15:05:33.579230 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0120 15:05:33.579530 2137369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0120 15:05:33.595480 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0120 15:05:33.603182 2137369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0120 15:05:33.603401 2137369 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0120 15:05:33.728727 2137369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0120 15:05:33.728864 2137369 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0120 15:05:34.245972 2137369 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 517.580806ms
I0120 15:05:34.246087 2137369 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0120 15:05:39.244154 2137369 kubeadm.go:310] [api-check] The API server is healthy after 5.00149055s
I0120 15:05:39.266303 2137369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0120 15:05:39.287362 2137369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0120 15:05:39.321758 2137369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0120 15:05:39.321956 2137369 kubeadm.go:310] [mark-control-plane] Marking the node addons-823768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0120 15:05:39.340511 2137369 kubeadm.go:310] [bootstrap-token] Using token: ctmxn9.z3jofwz9r9zooxkk
I0120 15:05:39.342300 2137369 out.go:235] - Configuring RBAC rules ...
I0120 15:05:39.342426 2137369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0120 15:05:39.356885 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0120 15:05:39.374011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0120 15:05:39.378696 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0120 15:05:39.383011 2137369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0120 15:05:39.388592 2137369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0120 15:05:39.650709 2137369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0120 15:05:40.082837 2137369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0120 15:05:40.650481 2137369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0120 15:05:40.651438 2137369 kubeadm.go:310]
I0120 15:05:40.651502 2137369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0120 15:05:40.651508 2137369 kubeadm.go:310]
I0120 15:05:40.651580 2137369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0120 15:05:40.651588 2137369 kubeadm.go:310]
I0120 15:05:40.651645 2137369 kubeadm.go:310] mkdir -p $HOME/.kube
I0120 15:05:40.651750 2137369 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0120 15:05:40.651833 2137369 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0120 15:05:40.651857 2137369 kubeadm.go:310]
I0120 15:05:40.651920 2137369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0120 15:05:40.651928 2137369 kubeadm.go:310]
I0120 15:05:40.651964 2137369 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0120 15:05:40.651970 2137369 kubeadm.go:310]
I0120 15:05:40.652010 2137369 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0120 15:05:40.652095 2137369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0120 15:05:40.652198 2137369 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0120 15:05:40.652208 2137369 kubeadm.go:310]
I0120 15:05:40.652305 2137369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0120 15:05:40.652415 2137369 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0120 15:05:40.652426 2137369 kubeadm.go:310]
I0120 15:05:40.652542 2137369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
I0120 15:05:40.652709 2137369 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23 \
I0120 15:05:40.652749 2137369 kubeadm.go:310] --control-plane
I0120 15:05:40.652769 2137369 kubeadm.go:310]
I0120 15:05:40.652869 2137369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0120 15:05:40.652877 2137369 kubeadm.go:310]
I0120 15:05:40.652965 2137369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ctmxn9.z3jofwz9r9zooxkk \
I0120 15:05:40.653092 2137369 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:c893fbc5bf142eea8522fddada004b7924f431a7feeb719562411af28ded5a23
I0120 15:05:40.653919 2137369 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0120 15:05:40.653957 2137369 cni.go:84] Creating CNI manager for ""
I0120 15:05:40.653968 2137369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0120 15:05:40.655707 2137369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0120 15:05:40.657014 2137369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0120 15:05:40.669371 2137369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
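The 496-byte file written here is the bridge CNI configuration for /etc/cni/net.d/1-k8s.conflist. Its exact contents are not in the log, but a minimal bridge-plus-portmap conflist for a node range under the 10.244.0.0/16 pod CIDR looks roughly like this (illustrative sketch written to a hypothetical path, not the verbatim minikube file):
sudo tee /etc/cni/net.d/99-example.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.244.0.0/24" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
EOF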
I0120 15:05:40.690666 2137369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0120 15:05:40.690750 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:40.690763 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-823768 minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc minikube.k8s.io/name=addons-823768 minikube.k8s.io/primary=true
I0120 15:05:40.817437 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:40.860939 2137369 ops.go:34] apiserver oom_adj: -16
I0120 15:05:41.317594 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:41.818178 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:42.318320 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:42.818281 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:43.318223 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:43.818194 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:44.317755 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:44.817685 2137369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0120 15:05:44.939850 2137369 kubeadm.go:1113] duration metric: took 4.249182583s to wait for elevateKubeSystemPrivileges
I0120 15:05:44.939901 2137369 kubeadm.go:394] duration metric: took 14.731620646s to StartCluster
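
The repeated "kubectl get sa default" runs above are a readiness poll: minikube waits for the default ServiceAccount to appear before binding cluster-admin to kube-system and moving on to addons, which is why elevateKubeSystemPrivileges reports a ~4s duration. A minimal sketch of that style of poll (interval, timeout, and kubeconfig path are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds once the default ServiceAccount exists in the cluster.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
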
I0120 15:05:44.939931 2137369 settings.go:142] acquiring lock: {Name:mk010ddf0f1361412fc75061b65d81e7c6d4228f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:44.940095 2137369 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20109-2129584/kubeconfig
I0120 15:05:44.940664 2137369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20109-2129584/kubeconfig: {Name:mk62c2ba85f28ab2593bf865f84dacdd345c5504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0120 15:05:44.940924 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0120 15:05:44.940960 2137369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.158 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I0120 15:05:44.941029 2137369 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0120 15:05:44.941156 2137369 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-823768"
I0120 15:05:44.941168 2137369 addons.go:69] Setting default-storageclass=true in profile "addons-823768"
I0120 15:05:44.941185 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-823768"
I0120 15:05:44.941225 2137369 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-823768"
I0120 15:05:44.941240 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:05:44.941236 2137369 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-823768"
I0120 15:05:44.941261 2137369 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-823768"
I0120 15:05:44.941263 2137369 addons.go:69] Setting ingress-dns=true in profile "addons-823768"
I0120 15:05:44.941263 2137369 addons.go:69] Setting storage-provisioner=true in profile "addons-823768"
I0120 15:05:44.941268 2137369 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-823768"
I0120 15:05:44.941235 2137369 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-823768"
I0120 15:05:44.941309 2137369 addons.go:69] Setting gcp-auth=true in profile "addons-823768"
I0120 15:05:44.941312 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.941245 2137369 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-823768"
I0120 15:05:44.941326 2137369 addons.go:69] Setting volcano=true in profile "addons-823768"
I0120 15:05:44.941335 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.941341 2137369 addons.go:238] Setting addon volcano=true in "addons-823768"
I0120 15:05:44.941340 2137369 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-823768"
I0120 15:05:44.941351 2137369 mustload.go:65] Loading cluster: addons-823768
I0120 15:05:44.941370 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.941519 2137369 config.go:182] Loaded profile config "addons-823768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I0120 15:05:44.941724 2137369 addons.go:69] Setting volumesnapshots=true in profile "addons-823768"
I0120 15:05:44.941738 2137369 addons.go:238] Setting addon volumesnapshots=true in "addons-823768"
I0120 15:05:44.941738 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941760 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.941762 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941764 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941775 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941254 2137369 addons.go:69] Setting registry=true in profile "addons-823768"
I0120 15:05:44.941801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.941316 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.941808 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941818 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.941845 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.941894 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.941803 2137369 addons.go:238] Setting addon registry=true in "addons-823768"
I0120 15:05:44.941238 2137369 addons.go:69] Setting cloud-spanner=true in profile "addons-823768"
I0120 15:05:44.941921 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.941927 2137369 addons.go:238] Setting addon cloud-spanner=true in "addons-823768"
I0120 15:05:44.941803 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.941300 2137369 addons.go:238] Setting addon ingress-dns=true in "addons-823768"
I0120 15:05:44.941312 2137369 addons.go:238] Setting addon storage-provisioner=true in "addons-823768"
I0120 15:05:44.941973 2137369 addons.go:69] Setting ingress=true in profile "addons-823768"
I0120 15:05:44.941987 2137369 addons.go:69] Setting metrics-server=true in profile "addons-823768"
I0120 15:05:44.942006 2137369 addons.go:238] Setting addon ingress=true in "addons-823768"
I0120 15:05:44.942008 2137369 addons.go:238] Setting addon metrics-server=true in "addons-823768"
I0120 15:05:44.941157 2137369 addons.go:69] Setting yakd=true in profile "addons-823768"
I0120 15:05:44.942019 2137369 addons.go:69] Setting inspektor-gadget=true in profile "addons-823768"
I0120 15:05:44.942024 2137369 addons.go:238] Setting addon yakd=true in "addons-823768"
I0120 15:05:44.942029 2137369 addons.go:238] Setting addon inspektor-gadget=true in "addons-823768"
I0120 15:05:44.941768 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.942160 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.942193 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.942221 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.942248 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.942361 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.942512 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.942664 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.942678 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.942701 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.942711 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.942769 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.942790 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.942801 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.942867 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.943045 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.943083 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.943132 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.943150 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.943180 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.943246 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.943315 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.943346 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.943438 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.943467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.943517 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.943543 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.944549 2137369 out.go:177] * Verifying Kubernetes components...
I0120 15:05:44.946093 2137369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0120 15:05:44.959677 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41897
I0120 15:05:44.960011 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42423
I0120 15:05:44.961799 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
I0120 15:05:44.962224 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43897
I0120 15:05:44.975456 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.975521 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.977858 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:44.977901 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:44.977974 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:44.978023 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:44.979595 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:44.979622 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:44.979676 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:44.979696 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:44.979754 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:44.979765 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:44.979780 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:44.979793 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:44.980051 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:44.980728 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.980771 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.981025 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:44.981103 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:44.981151 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:44.981247 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:44.981674 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.981709 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.990376 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.990438 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:44.992035 2137369 addons.go:238] Setting addon default-storageclass=true in "addons-823768"
I0120 15:05:44.992096 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:44.992479 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:44.992535 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.012533 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45025
I0120 15:05:45.013196 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.013883 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.013910 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.014339 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.015001 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.015058 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.015378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
I0120 15:05:45.015747 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35953
I0120 15:05:45.015858 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.016102 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.016315 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.016336 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.016397 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41169
I0120 15:05:45.016643 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.016800 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.016927 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.016938 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.017161 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.017666 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.017682 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.018062 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.018681 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.018724 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.018957 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39807
I0120 15:05:45.018991 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
I0120 15:05:45.019076 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
I0120 15:05:45.019360 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.019429 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.020018 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.020059 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.020141 2137369 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-823768"
I0120 15:05:45.020185 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:45.020270 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.020464 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.020478 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.020539 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.020546 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.020581 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.020619 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37543
I0120 15:05:45.021207 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.021225 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.021347 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
I0120 15:05:45.021477 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.021943 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.021966 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.022029 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.022276 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.022435 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.022448 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.022804 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.022874 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.023429 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.023466 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.023660 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.023716 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.024288 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.024332 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.024435 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.024590 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.024603 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.025568 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.026786 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0120 15:05:45.028110 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0120 15:05:45.030717 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37701
I0120 15:05:45.031353 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.031932 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.031958 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.032368 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.032579 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.032756 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0120 15:05:45.034385 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:45.034810 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.034864 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.035693 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0120 15:05:45.036864 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0120 15:05:45.038073 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0120 15:05:45.039402 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0120 15:05:45.040775 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0120 15:05:45.041455 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38573
I0120 15:05:45.041873 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0120 15:05:45.041899 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0120 15:05:45.041928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.043759 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
I0120 15:05:45.044358 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.045115 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.045137 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.045935 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.045941 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.046009 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.046409 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.046468 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.046483 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.046577 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36929
I0120 15:05:45.046798 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.047024 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.047144 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.047300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.047464 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.047824 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42519
I0120 15:05:45.047935 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.047955 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.048202 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.048664 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.048738 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.048759 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.049129 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.049227 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.049283 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.049394 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.049412 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.050189 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.050309 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.050835 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.050876 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.052764 2137369 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0120 15:05:45.054269 2137369 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0120 15:05:45.054297 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0120 15:05:45.054320 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.055060 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46385
I0120 15:05:45.058077 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
I0120 15:05:45.058077 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.058568 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.058679 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.059020 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.059104 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
I0120 15:05:45.059253 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.059422 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.059552 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.063381 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.063387 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.063439 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.063467 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.063536 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45445
I0120 15:05:45.063633 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46097
I0120 15:05:45.063727 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.064081 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.064189 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.064230 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.064315 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.064334 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.064318 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.064393 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.064400 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
I0120 15:05:45.064875 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.064889 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.064909 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.064977 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.065050 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.065064 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.065201 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.065215 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.065388 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.065403 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.065458 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.065499 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.066244 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.066355 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.066407 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.066452 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.066496 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.067341 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.067385 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.067968 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:45.068002 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.068008 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:45.068018 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.068091 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.068266 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.068547 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.068806 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.068847 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.070172 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.070709 2137369 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.36.0
I0120 15:05:45.070818 2137369 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0120 15:05:45.072050 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0120 15:05:45.072074 2137369 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.072104 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.072056 2137369 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0120 15:05:45.073050 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0120 15:05:45.073066 2137369 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0120 15:05:45.073096 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.073987 2137369 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0120 15:05:45.074342 2137369 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0120 15:05:45.074359 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0120 15:05:45.074379 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.077368 2137369 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0120 15:05:45.078567 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.079183 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.080151 2137369 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0120 15:05:45.080251 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.080282 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.080851 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.081141 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.081161 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.081487 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0120 15:05:45.081508 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0120 15:05:45.081529 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.081534 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.081928 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.081993 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.082052 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.082065 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.082637 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.082689 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.082714 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.083344 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.083414 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.083588 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.084012 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.084190 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.084795 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.085120 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.085772 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.085809 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.085817 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.085975 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.086122 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.086288 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.086641 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34043
I0120 15:05:45.090566 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.091280 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.091307 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.091792 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.091991 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.093777 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.094693 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
I0120 15:05:45.095349 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.095731 2137369 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0120 15:05:45.095944 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.095969 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.096365 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.096584 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.096879 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0120 15:05:45.096896 2137369 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0120 15:05:45.096918 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.098679 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.098923 2137369 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0120 15:05:45.098946 2137369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0120 15:05:45.098964 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.099552 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
I0120 15:05:45.100021 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.100586 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.100612 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.100957 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.101159 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.101709 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
I0120 15:05:45.102248 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.102752 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.102794 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34471
I0120 15:05:45.102845 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.102863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.102927 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.103224 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.103297 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.103391 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.103406 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.103603 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.103859 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.103886 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.103960 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.104015 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.104017 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.104033 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.104060 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.104106 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.104559 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.104625 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.104773 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.104831 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.104961 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.105289 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.105873 2137369 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0120 15:05:45.105892 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.106638 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.108265 2137369 out.go:177] - Using image docker.io/busybox:stable
I0120 15:05:45.108267 2137369 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0120 15:05:45.109934 2137369 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0120 15:05:45.109962 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0120 15:05:45.109986 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.109937 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0120 15:05:45.110047 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0120 15:05:45.110061 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.110071 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
I0120 15:05:45.110709 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.110716 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
I0120 15:05:45.111211 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.111237 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.111479 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.116371 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.116407 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.116419 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.116428 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.116378 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39517
I0120 15:05:45.116376 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.116503 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.116539 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.116542 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.116568 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.116736 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.116754 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.116795 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.117290 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.117300 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.117478 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.117505 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.117648 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.118043 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.118047 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.118553 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.118863 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.118882 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.119027 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.119310 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.119541 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.119616 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.120800 2137369 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0120 15:05:45.121412 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.122194 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0120 15:05:45.122217 2137369 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0120 15:05:45.122238 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.122255 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.122572 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:45.122640 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:45.122951 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:45.122971 2137369 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0120 15:05:45.123063 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:45.123075 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:45.123096 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:45.123119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:45.123485 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:45.123498 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
W0120 15:05:45.123579 2137369 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0120 15:05:45.124438 2137369 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0120 15:05:45.124457 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0120 15:05:45.124477 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.126513 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.126830 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.126866 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.127075 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.127259 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.127582 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.127743 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.128516 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
I0120 15:05:45.129042 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.129062 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.129786 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.129811 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.129822 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.129863 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.130021 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.130296 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.130295 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.130504 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.130640 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.130691 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.131987 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
I0120 15:05:45.132360 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.132609 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:45.133420 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:45.133445 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:45.133859 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:45.134118 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:45.134354 2137369 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
I0120 15:05:45.135691 2137369 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0120 15:05:45.135713 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0120 15:05:45.135737 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.135899 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:45.137430 2137369 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0120 15:05:45.138646 2137369 out.go:177] - Using image docker.io/registry:2.8.3
I0120 15:05:45.138920 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.139336 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.139350 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.139552 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.139760 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.139903 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0120 15:05:45.139927 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0120 15:05:45.139947 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:45.139913 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.140150 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
W0120 15:05:45.141472 2137369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
I0120 15:05:45.141659 2137369 retry.go:31] will retry after 248.832256ms: ssh: handshake failed: read tcp 192.168.39.1:33340->192.168.39.158:22: read: connection reset by peer
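The handshake failure above is transient (the guest sshd is still settling) and provisioning continues below after the retry. If it persisted, the same connection could be probed by hand with the key, user and address recorded in the sshutil lines; a sketch only, not part of the test run:

    ssh -i /home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa \
      docker@192.168.39.158 true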
I0120 15:05:45.143344 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.143825 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:45.143969 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:45.144003 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:45.144223 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:45.144424 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:45.144580 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:45.417776 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0120 15:05:45.471994 2137369 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0120 15:05:45.472017 2137369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0120 15:05:45.489674 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0120 15:05:45.526555 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0120 15:05:45.527835 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0120 15:05:45.527865 2137369 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0120 15:05:45.550223 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0120 15:05:45.550256 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0120 15:05:45.593768 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0120 15:05:45.603223 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0120 15:05:45.617896 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0120 15:05:45.640784 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0120 15:05:45.640819 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0120 15:05:45.663716 2137369 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0120 15:05:45.663743 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0120 15:05:45.677268 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0120 15:05:45.677311 2137369 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0120 15:05:45.703833 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0120 15:05:45.703861 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0120 15:05:45.714450 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0120 15:05:45.755610 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0120 15:05:45.755638 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0120 15:05:45.790857 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0120 15:05:45.790881 2137369 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0120 15:05:45.845887 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0120 15:05:45.887294 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0120 15:05:45.887977 2137369 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0120 15:05:45.888000 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0120 15:05:45.924864 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0120 15:05:45.924896 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0120 15:05:45.925761 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0120 15:05:45.925784 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0120 15:05:45.937497 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0120 15:05:45.937531 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0120 15:05:46.025842 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0120 15:05:46.025879 2137369 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0120 15:05:46.113217 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0120 15:05:46.142184 2137369 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0120 15:05:46.142236 2137369 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0120 15:05:46.196187 2137369 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0120 15:05:46.196215 2137369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0120 15:05:46.211841 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0120 15:05:46.211883 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0120 15:05:46.260854 2137369 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0120 15:05:46.260889 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0120 15:05:46.349717 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0120 15:05:46.363946 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0120 15:05:46.363982 2137369 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0120 15:05:46.531940 2137369 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0120 15:05:46.531972 2137369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0120 15:05:46.676731 2137369 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0120 15:05:46.676761 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0120 15:05:46.699767 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0120 15:05:46.911967 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0120 15:05:46.912002 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0120 15:05:47.094846 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0120 15:05:47.136150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.718325287s)
I0120 15:05:47.136232 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:47.136254 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:47.136602 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:47.136623 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:47.136638 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:47.136742 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:47.137159 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:47.137183 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:47.256792 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0120 15:05:47.256827 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0120 15:05:47.600570 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0120 15:05:47.600599 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0120 15:05:48.025069 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0120 15:05:48.025100 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0120 15:05:48.180094 2137369 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.708049921s)
I0120 15:05:48.180159 2137369 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.708105198s)
I0120 15:05:48.180191 2137369 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
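The sed pipeline completed above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here) and adds a log directive ahead of errors. A hedged way to confirm the injected fragment, assuming the addons-823768 context is reachable from the host:

    kubectl --context addons-823768 -n kube-system get configmap coredns -o yaml

    # expected block, inserted ahead of the "forward . /etc/resolv.conf" directive:
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }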
I0120 15:05:48.180283 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.690575165s)
I0120 15:05:48.180339 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:48.180353 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:48.180355 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.653759394s)
I0120 15:05:48.180401 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:48.180419 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:48.180669 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:48.180685 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:48.180696 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:48.180703 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:48.180826 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:48.180904 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:48.180930 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:48.181183 2137369 node_ready.go:35] waiting up to 6m0s for node "addons-823768" to be "Ready" ...
I0120 15:05:48.181456 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:48.181473 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:48.181482 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:48.181494 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:48.182413 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:48.182432 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:48.182430 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:48.193850 2137369 node_ready.go:49] node "addons-823768" has status "Ready":"True"
I0120 15:05:48.193881 2137369 node_ready.go:38] duration metric: took 12.636766ms for node "addons-823768" to be "Ready" ...
I0120 15:05:48.193893 2137369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
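The pod_ready.go wait above polls the API for the listed system-critical selectors; a rough command-line equivalent for one of them, shown only as a sketch under the same context:

    kubectl --context addons-823768 -n kube-system wait --for=condition=Ready \
      pod -l k8s-app=kube-dns --timeout=6m0s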
I0120 15:05:48.246992 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:48.247119 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:48.247468 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:48.247530 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:48.247542 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:48.259232 2137369 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
I0120 15:05:48.283051 2137369 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0120 15:05:48.283087 2137369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0120 15:05:48.686812 2137369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-823768" context rescaled to 1 replicas
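The rescale logged above amounts to pinning the coredns deployment at one replica; a hedged kubectl equivalent would be:

    kubectl --context addons-823768 -n kube-system scale deployment coredns --replicas=1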
I0120 15:05:48.755354 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0120 15:05:50.506941 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:05:51.355199 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.761376681s)
I0120 15:05:51.355294 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:51.355314 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:51.355667 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:51.355754 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:51.355778 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:51.355801 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:51.355813 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:51.356189 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:51.356205 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:51.457453 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:51.457488 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:51.457937 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:51.458005 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:51.458029 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:51.537693 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.934416648s)
I0120 15:05:51.537784 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:51.537799 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:51.538261 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:51.538287 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:51.538298 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:51.538307 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:51.538535 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:51.538558 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:51.538576 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:51.939586 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0120 15:05:51.939639 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:51.943517 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:51.944138 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:51.944174 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:51.944392 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:51.944662 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:51.944863 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:51.945029 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:52.359222 2137369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0120 15:05:52.485709 2137369 addons.go:238] Setting addon gcp-auth=true in "addons-823768"
I0120 15:05:52.485795 2137369 host.go:66] Checking if "addons-823768" exists ...
I0120 15:05:52.486338 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:52.486410 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:52.503565 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
I0120 15:05:52.504038 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:52.504670 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:52.504702 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:52.505075 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:52.505679 2137369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0120 15:05:52.505728 2137369 main.go:141] libmachine: Launching plugin server for driver kvm2
I0120 15:05:52.521951 2137369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
I0120 15:05:52.522548 2137369 main.go:141] libmachine: () Calling .GetVersion
I0120 15:05:52.523148 2137369 main.go:141] libmachine: Using API Version 1
I0120 15:05:52.523181 2137369 main.go:141] libmachine: () Calling .SetConfigRaw
I0120 15:05:52.523646 2137369 main.go:141] libmachine: () Calling .GetMachineName
I0120 15:05:52.523933 2137369 main.go:141] libmachine: (addons-823768) Calling .GetState
I0120 15:05:52.526028 2137369 main.go:141] libmachine: (addons-823768) Calling .DriverName
I0120 15:05:52.526329 2137369 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0120 15:05:52.526368 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHHostname
I0120 15:05:52.529896 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:52.530491 2137369 main.go:141] libmachine: (addons-823768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:25:8d:22", ip: ""} in network mk-addons-823768: {Iface:virbr1 ExpiryTime:2025-01-20 16:05:13 +0000 UTC Type:0 Mac:52:54:00:25:8d:22 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:addons-823768 Clientid:01:52:54:00:25:8d:22}
I0120 15:05:52.530534 2137369 main.go:141] libmachine: (addons-823768) DBG | domain addons-823768 has defined IP address 192.168.39.158 and MAC address 52:54:00:25:8d:22 in network mk-addons-823768
I0120 15:05:52.530704 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHPort
I0120 15:05:52.530923 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHKeyPath
I0120 15:05:52.531085 2137369 main.go:141] libmachine: (addons-823768) Calling .GetSSHUsername
I0120 15:05:52.531247 2137369 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20109-2129584/.minikube/machines/addons-823768/id_rsa Username:docker}
I0120 15:05:52.803206 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:05:53.218889 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.600943277s)
I0120 15:05:53.218965 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.218981 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.218975 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.504485364s)
I0120 15:05:53.219043 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.373117907s)
I0120 15:05:53.219086 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219102 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219059 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219150 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.331817088s)
I0120 15:05:53.219162 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219185 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219205 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219246 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.105968154s)
I0120 15:05:53.219283 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219298 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219406 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.869651037s)
I0120 15:05:53.219441 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219452 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219549 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.519750539s)
I0120 15:05:53.219567 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219576 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219695 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.124796876s)
W0120 15:05:53.219739 2137369 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0120 15:05:53.219749 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219750 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219761 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219774 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219784 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219786 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219802 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219784 2137369 retry.go:31] will retry after 298.07171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
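The two failures above are the same ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for the kind yet. The retry at 15:05:53.518952 below re-applies with --force and completes at 15:05:56.797148 with no further warning. A manual workaround sketch, mirroring the in-node invocations used above (paths as logged, behaviour not verified here), would wait for the CRD to be established before applying the class:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl wait \
      --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml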
I0120 15:05:53.219827 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219835 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219830 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219847 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219851 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219856 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219857 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219861 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219868 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219885 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219885 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219896 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219905 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219868 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219912 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219916 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.219931 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.219940 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.219909 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.219947 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.219953 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.220005 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.220026 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.220032 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.220039 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.220045 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.220117 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.220129 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.220211 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:53.220226 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:53.221944 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.222004 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.222013 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.222022 2137369 addons.go:479] Verifying addon ingress=true in "addons-823768"
I0120 15:05:53.222034 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.222059 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.222065 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.222071 2137369 addons.go:479] Verifying addon registry=true in "addons-823768"
I0120 15:05:53.222245 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.222266 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.222270 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.222283 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.222298 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.222513 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.222673 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.222687 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.223993 2137369 out.go:177] * Verifying ingress addon...
I0120 15:05:53.224095 2137369 out.go:177] * Verifying registry addon...
I0120 15:05:53.224118 2137369 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-823768 service yakd-dashboard -n yakd-dashboard
I0120 15:05:53.225572 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:53.225592 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:53.225602 2137369 addons.go:479] Verifying addon metrics-server=true in "addons-823768"
I0120 15:05:53.225606 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:53.226182 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0120 15:05:53.226205 2137369 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0120 15:05:53.266543 2137369 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0120 15:05:53.266570 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:53.270246 2137369 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0120 15:05:53.270273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
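The kapi.go polling above (and the long runs of Pending lines that follow) can be reproduced by hand with the same label selectors; a sketch, assuming the addons-823768 context:

    kubectl --context addons-823768 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-823768 -n kube-system get pods -l kubernetes.io/minikube-addons=registry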
I0120 15:05:53.518952 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0120 15:05:53.732321 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:53.733564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:54.236310 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:54.238382 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:54.734271 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:54.734269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:55.254274 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:55.254786 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:55.344689 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:05:55.500757 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.745335252s)
I0120 15:05:55.500816 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:55.500835 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:55.500856 2137369 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.974495908s)
I0120 15:05:55.501209 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:55.501237 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:55.501248 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:55.501260 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:55.501495 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:55.501519 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:55.501544 2137369 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-823768"
I0120 15:05:55.502730 2137369 out.go:177] * Verifying csi-hostpath-driver addon...
I0120 15:05:55.502734 2137369 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0120 15:05:55.504940 2137369 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0120 15:05:55.505554 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0120 15:05:55.506259 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0120 15:05:55.506279 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0120 15:05:55.575714 2137369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0120 15:05:55.575753 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:55.671030 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0120 15:05:55.671060 2137369 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0120 15:05:55.724863 2137369 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0120 15:05:55.724895 2137369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0120 15:05:55.730623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:55.733038 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:55.782983 2137369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0120 15:05:56.011925 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:56.234678 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:56.234948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:56.512779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:56.731710 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:56.731852 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:56.797148 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.278133853s)
I0120 15:05:56.797216 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:56.797235 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:56.797528 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:56.797547 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:56.797556 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:56.797563 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:56.797791 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:56.797809 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:56.797822 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:57.010804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:57.233033 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:57.233289 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:57.542025 2137369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.758981808s)
I0120 15:05:57.542087 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:57.542105 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:57.542525 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:57.542544 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:57.542553 2137369 main.go:141] libmachine: Making call to close driver server
I0120 15:05:57.542551 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:57.542560 2137369 main.go:141] libmachine: (addons-823768) Calling .Close
I0120 15:05:57.542800 2137369 main.go:141] libmachine: (addons-823768) DBG | Closing plugin on server side
I0120 15:05:57.542813 2137369 main.go:141] libmachine: Successfully made call to close driver server
I0120 15:05:57.542824 2137369 main.go:141] libmachine: Making call to close connection to plugin binary
I0120 15:05:57.543638 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:57.544206 2137369 addons.go:479] Verifying addon gcp-auth=true in "addons-823768"
I0120 15:05:57.546233 2137369 out.go:177] * Verifying gcp-auth addon...
I0120 15:05:57.548167 2137369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0120 15:05:57.608035 2137369 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0120 15:05:57.608063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:05:57.757556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:57.758358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:57.801308 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:05:58.017964 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:58.054047 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:05:58.232134 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:58.232349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:58.511487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:58.552181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:05:58.732397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:58.732613 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:59.009728 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:59.052751 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:05:59.232207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:59.232938 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:05:59.511390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:05:59.552558 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:05:59.732339 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:05:59.733137 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:00.011192 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:00.052027 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:00.230588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:00.230983 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:00.265616 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:00.512889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:00.553312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:00.731731 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:00.732346 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:01.010060 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:01.052209 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:01.230828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:01.231470 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:01.535707 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:01.552390 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:01.731636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:01.732100 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:02.011594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:02.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:02.231481 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:02.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:02.512089 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:02.552231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:02.732082 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:02.733140 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:02.765831 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:03.010323 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:03.051618 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:03.232259 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:03.232419 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:03.511102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:03.552477 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:03.731997 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:03.732052 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:04.012261 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:04.052901 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:04.231169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:04.231374 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:04.659683 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:04.660505 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:04.731959 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:04.732234 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:04.767143 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:05.010349 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:05.051978 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:05.231135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:05.231273 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:05.512111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:05.552863 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:05.732208 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:05.732591 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:06.011307 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:06.052476 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:06.232498 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:06.233241 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:06.510827 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:06.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:06.981709 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:06.986582 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:06.990553 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:07.011207 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:07.052592 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:07.231024 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:07.231680 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:07.511630 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:07.551889 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:07.731928 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:07.732524 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:08.011481 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:08.051943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:08.232305 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:08.232698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:08.510640 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:08.552388 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:08.730939 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:08.733311 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:09.011242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:09.052916 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:09.231309 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:09.232010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:09.266856 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:09.513742 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:09.551779 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:09.730962 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:09.731230 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:10.010833 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:10.051663 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:10.231411 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:10.232988 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:10.511809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:10.552186 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:10.732270 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:10.733127 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:11.347386 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:11.359078 2137369 pod_ready.go:103] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"False"
I0120 15:06:11.446014 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:11.446575 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:11.446659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:11.547362 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:11.554123 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:11.732018 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:11.732027 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:12.011787 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:12.051888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:12.233138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:12.233471 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:12.511105 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:12.552912 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:12.731592 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:12.733254 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:12.765446 2137369 pod_ready.go:93] pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:12.765473 2137369 pod_ready.go:82] duration metric: took 24.50620135s for pod "amd-gpu-device-plugin-hd9wh" in "kube-system" namespace to be "Ready" ...
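The readiness loop above polls the PodReady condition roughly every two seconds until the amd-gpu-device-plugin pod reports Ready (24.5s here). A roughly equivalent one-off check from outside the test harness, reusing the context, namespace, pod name and 6m budget shown in the log, might look like:
    # illustrative only; not the command the test harness itself runs
    kubectl --context addons-823768 -n kube-system wait pod amd-gpu-device-plugin-hd9wh \
      --for=condition=Ready --timeout=6m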
I0120 15:06:12.765484 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.772118 2137369 pod_ready.go:93] pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:12.772143 2137369 pod_ready.go:82] duration metric: took 6.652598ms for pod "coredns-668d6bf9bc-5vcsv" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.772152 2137369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.774084 2137369 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
I0120 15:06:12.774108 2137369 pod_ready.go:82] duration metric: took 1.950369ms for pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace to be "Ready" ...
E0120 15:06:12.774119 2137369 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-p59mv" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-p59mv" not found
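The coredns-668d6bf9bc-p59mv pod no longer exists (likely because minikube scales the CoreDNS deployment down to a single replica during setup), so the extra wait skips it instead of failing. To see which CoreDNS pods actually remain, a check along these lines would work:
    # illustrative; k8s-app=kube-dns is the standard CoreDNS selector, also listed in the wait labels below
    kubectl --context addons-823768 -n kube-system get pods -l k8s-app=kube-dns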
I0120 15:06:12.774125 2137369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.779574 2137369 pod_ready.go:93] pod "etcd-addons-823768" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:12.779594 2137369 pod_ready.go:82] duration metric: took 5.463343ms for pod "etcd-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.779604 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.786673 2137369 pod_ready.go:93] pod "kube-apiserver-addons-823768" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:12.786695 2137369 pod_ready.go:82] duration metric: took 7.084094ms for pod "kube-apiserver-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.786705 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.964107 2137369 pod_ready.go:93] pod "kube-controller-manager-addons-823768" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:12.964143 2137369 pod_ready.go:82] duration metric: took 177.429563ms for pod "kube-controller-manager-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:12.964159 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
I0120 15:06:13.010809 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:13.052318 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:13.231805 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:13.232197 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:13.364952 2137369 pod_ready.go:93] pod "kube-proxy-7rvmm" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:13.364991 2137369 pod_ready.go:82] duration metric: took 400.822729ms for pod "kube-proxy-7rvmm" in "kube-system" namespace to be "Ready" ...
I0120 15:06:13.365008 2137369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:13.510667 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:13.551664 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:13.732398 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:13.733063 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:13.763469 2137369 pod_ready.go:93] pod "kube-scheduler-addons-823768" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:13.763497 2137369 pod_ready.go:82] duration metric: took 398.480559ms for pod "kube-scheduler-addons-823768" in "kube-system" namespace to be "Ready" ...
I0120 15:06:13.763510 2137369 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
I0120 15:06:14.011840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:14.052929 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:14.164972 2137369 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace has status "Ready":"True"
I0120 15:06:14.165004 2137369 pod_ready.go:82] duration metric: took 401.486108ms for pod "nvidia-device-plugin-daemonset-nbm5g" in "kube-system" namespace to be "Ready" ...
I0120 15:06:14.165013 2137369 pod_ready.go:39] duration metric: took 25.971110211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0120 15:06:14.165032 2137369 api_server.go:52] waiting for apiserver process to appear ...
I0120 15:06:14.165104 2137369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0120 15:06:14.200902 2137369 api_server.go:72] duration metric: took 29.259888219s to wait for apiserver process to appear ...
I0120 15:06:14.200940 2137369 api_server.go:88] waiting for apiserver healthz status ...
I0120 15:06:14.200966 2137369 api_server.go:253] Checking apiserver healthz at https://192.168.39.158:8443/healthz ...
I0120 15:06:14.206516 2137369 api_server.go:279] https://192.168.39.158:8443/healthz returned 200:
ok
I0120 15:06:14.207753 2137369 api_server.go:141] control plane version: v1.32.0
I0120 15:06:14.207791 2137369 api_server.go:131] duration metric: took 6.841209ms to wait for apiserver health ...
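The control-plane check above has two steps: pgrep over SSH confirms a kube-apiserver process exists, then an HTTPS probe of /healthz on 192.168.39.158:8443 returns 200. Comparable manual checks against the same profile and kubeconfig context, offered only as a sketch, might be:
    # illustrative equivalents of the two checks logged above
    out/minikube-linux-amd64 -p addons-823768 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    kubectl --context addons-823768 get --raw /healthz   # prints "ok" when the apiserver is healthy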
I0120 15:06:14.207804 2137369 system_pods.go:43] waiting for kube-system pods to appear ...
I0120 15:06:14.233265 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:14.234965 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:14.370097 2137369 system_pods.go:59] 18 kube-system pods found
I0120 15:06:14.370150 2137369 system_pods.go:61] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
I0120 15:06:14.370159 2137369 system_pods.go:61] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
I0120 15:06:14.370170 2137369 system_pods.go:61] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0120 15:06:14.370182 2137369 system_pods.go:61] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0120 15:06:14.370193 2137369 system_pods.go:61] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0120 15:06:14.370201 2137369 system_pods.go:61] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
I0120 15:06:14.370206 2137369 system_pods.go:61] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
I0120 15:06:14.370212 2137369 system_pods.go:61] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
I0120 15:06:14.370220 2137369 system_pods.go:61] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0120 15:06:14.370228 2137369 system_pods.go:61] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
I0120 15:06:14.370235 2137369 system_pods.go:61] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
I0120 15:06:14.370244 2137369 system_pods.go:61] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 15:06:14.370253 2137369 system_pods.go:61] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
I0120 15:06:14.370263 2137369 system_pods.go:61] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0120 15:06:14.370271 2137369 system_pods.go:61] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0120 15:06:14.370303 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0120 15:06:14.370312 2137369 system_pods.go:61] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0120 15:06:14.370317 2137369 system_pods.go:61] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
I0120 15:06:14.370328 2137369 system_pods.go:74] duration metric: took 162.516641ms to wait for pod list to return data ...
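The 18-pod inventory above is a snapshot of kube-system, including the still-Pending CSI, registry and metrics-server pods; the same view can be taken directly with:
    # illustrative listing of the same namespace
    kubectl --context addons-823768 -n kube-system get pods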
I0120 15:06:14.370343 2137369 default_sa.go:34] waiting for default service account to be created ...
I0120 15:06:14.509778 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:14.552297 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:14.563348 2137369 default_sa.go:45] found service account: "default"
I0120 15:06:14.563381 2137369 default_sa.go:55] duration metric: took 193.030729ms for default service account to be created ...
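The 193ms wait above is for the "default" ServiceAccount to appear; assuming it is the default namespace being checked, a direct equivalent would be:
    # illustrative check that the default ServiceAccount exists (namespace assumed)
    kubectl --context addons-823768 -n default get serviceaccount default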
I0120 15:06:14.563393 2137369 system_pods.go:137] waiting for k8s-apps to be running ...
I0120 15:06:14.730162 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:14.730276 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:14.769176 2137369 system_pods.go:87] 18 kube-system pods found
I0120 15:06:14.964028 2137369 system_pods.go:105] "amd-gpu-device-plugin-hd9wh" [74d848dc-f26d-43fe-8a5a-a0df1659422e] Running
I0120 15:06:14.964091 2137369 system_pods.go:105] "coredns-668d6bf9bc-5vcsv" [07cf3526-d1a7-45e9-a4b0-843c4c5d8087] Running
I0120 15:06:14.964101 2137369 system_pods.go:105] "csi-hostpath-attacher-0" [116b9f15-1304-49fb-9076-931a2afbb254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0120 15:06:14.964108 2137369 system_pods.go:105] "csi-hostpath-resizer-0" [ff9ae680-66e0-4d97-a31f-401bc2303326] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0120 15:06:14.964121 2137369 system_pods.go:105] "csi-hostpathplugin-gnx78" [c749cfac-9a22-4577-9180-7c6720645ff1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0120 15:06:14.964126 2137369 system_pods.go:105] "etcd-addons-823768" [08fad36c-a2d6-4155-b601-6f4e7384579b] Running
I0120 15:06:14.964133 2137369 system_pods.go:105] "kube-apiserver-addons-823768" [59da341e-91d6-4346-9d34-8ef1d3cc6f8f] Running
I0120 15:06:14.964141 2137369 system_pods.go:105] "kube-controller-manager-addons-823768" [d40a64ff-5eba-4184-ad41-8134c3107af4] Running
I0120 15:06:14.964148 2137369 system_pods.go:105] "kube-ingress-dns-minikube" [c004e6ed-e3c7-41fb-81db-143b10c8e7be] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I0120 15:06:14.964153 2137369 system_pods.go:105] "kube-proxy-7rvmm" [ad2f5c6d-b93f-4390-876b-33132993d790] Running
I0120 15:06:14.964160 2137369 system_pods.go:105] "kube-scheduler-addons-823768" [2baca71e-3466-46ff-88cc-4c21ff431e5e] Running
I0120 15:06:14.964166 2137369 system_pods.go:105] "metrics-server-7fbb699795-9st7r" [6298e5c1-be6a-46ae-ab5f-36c0273b0dfb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0120 15:06:14.964173 2137369 system_pods.go:105] "nvidia-device-plugin-daemonset-nbm5g" [cef6725a-67fd-465e-abee-d71f4159ef92] Running
I0120 15:06:14.964180 2137369 system_pods.go:105] "registry-6c86875c6f-zjrvn" [0eff11df-e7ff-4331-8d40-9b86a497286d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I0120 15:06:14.964186 2137369 system_pods.go:105] "registry-proxy-s6v6f" [fd22a4f5-094c-4b62-a18c-cb9b1478e55f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0120 15:06:14.964197 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-v9qfd" [9f5c996f-6eab-461e-ab1b-cd3349dd28b6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0120 15:06:14.964205 2137369 system_pods.go:105] "snapshot-controller-68b874b76f-wz6d5" [cacd7ffe-a681-4acf-96f8-18ef261221a0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0120 15:06:14.964210 2137369 system_pods.go:105] "storage-provisioner" [0e778f21-8d84-4dd3-a4d5-1d838a0c732a] Running
I0120 15:06:14.964220 2137369 system_pods.go:147] duration metric: took 400.820113ms to wait for k8s-apps to be running ...
I0120 15:06:14.964230 2137369 system_svc.go:44] waiting for kubelet service to be running ....
I0120 15:06:14.964284 2137369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0120 15:06:15.004824 2137369 system_svc.go:56] duration metric: took 40.572241ms WaitForService to wait for kubelet
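The kubelet check above is a systemd query run over SSH inside the VM; a hand-run check against the same profile, as a sketch only, might be:
    # illustrative; prints "active" when the kubelet unit is running
    out/minikube-linux-amd64 -p addons-823768 ssh "sudo systemctl is-active kubelet"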
I0120 15:06:15.004866 2137369 kubeadm.go:582] duration metric: took 30.063861442s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0120 15:06:15.004901 2137369 node_conditions.go:102] verifying NodePressure condition ...
I0120 15:06:15.009936 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:15.052242 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:15.164145 2137369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0120 15:06:15.164177 2137369 node_conditions.go:123] node cpu capacity is 2
I0120 15:06:15.164191 2137369 node_conditions.go:105] duration metric: took 159.284808ms to run NodePressure ...
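The NodePressure step reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs here). Assuming the single node is named after the profile, as the static pod names above suggest, the same figures can be read with:
    # illustrative; prints the node's reported capacity
    kubectl --context addons-823768 get node addons-823768 -o jsonpath='{.status.capacity}{"\n"}'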
I0120 15:06:15.164204 2137369 start.go:241] waiting for startup goroutines ...
I0120 15:06:15.230651 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:15.230956 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:15.510392 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:15.552212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:15.732107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:15.732654 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:16.010121 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:16.053180 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:16.232364 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:16.232798 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:16.511718 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:16.552275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:16.731858 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:16.732386 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:17.010874 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:17.051623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:17.231000 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:17.232412 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:17.510781 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:17.552061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:17.733072 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:17.733322 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:18.010148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:18.051291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:18.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:18.232743 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:18.512092 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:18.552325 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:18.731432 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:18.731830 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:19.279723 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:19.279804 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:19.280003 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:19.280489 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:19.510898 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:19.552506 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:19.730594 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:19.731187 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:20.010579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:20.052401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:20.230980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:20.231222 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:20.510335 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:20.551579 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:20.731061 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:20.731252 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:21.010169 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:21.052654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:21.229930 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:21.230449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:21.510623 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:21.552046 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:21.731181 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:21.731380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:22.011222 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:22.052269 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:22.231033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:22.232123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:22.510785 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:22.610273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:22.731847 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:22.732017 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:23.010161 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:23.051461 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:23.232240 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:23.232266 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:23.511226 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:23.552179 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:23.732405 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:23.732643 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:24.010952 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:24.052795 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:24.231556 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:24.231982 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:24.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:24.551620 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:24.730311 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:24.730951 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:25.011086 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:25.051840 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:25.236485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:25.237121 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:25.513580 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:25.551665 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:25.744969 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:25.745049 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:26.014786 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:26.054803 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:26.240066 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:26.240329 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:26.510110 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:26.552530 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:26.737921 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:26.743487 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:27.013212 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:27.055873 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:27.231178 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:27.233505 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:27.512769 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:27.551845 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:27.731474 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:27.731923 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:28.010313 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:28.052515 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:28.231843 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:28.232624 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:28.511885 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:28.552260 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:28.732295 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:28.732302 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:29.012191 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:29.052216 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:29.232422 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:29.232716 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:29.511140 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:29.662141 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:29.737287 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:29.737522 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:30.011355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:30.051923 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:30.231542 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:30.232918 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:30.511333 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:30.552399 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:30.731397 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:30.731994 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:31.010820 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:31.052300 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:31.232512 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:31.232915 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:31.511124 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:31.552129 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:31.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:31.732929 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:32.010958 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:32.052413 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:32.232661 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:32.232713 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:32.512609 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:32.551853 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:32.731496 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:32.731943 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0120 15:06:33.012493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:33.051613 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:33.230151 2137369 kapi.go:107] duration metric: took 40.003969564s to wait for kubernetes.io/minikube-addons=registry ...
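The registry wait completes here after 40s, once every pod labelled kubernetes.io/minikube-addons=registry is past Pending. A comparable stand-alone wait on that selector, assuming the registry pods live in kube-system as listed earlier, would be:
    # illustrative; the remaining gcp-auth/csi-hostpath-driver/ingress-nginx waits below are analogous
    kubectl --context addons-823768 -n kube-system wait pod \
      -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m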
I0120 15:06:33.231162 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:33.511111 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:33.552356 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:33.731499 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:34.013686 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:34.052825 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:34.231068 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:34.511033 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:34.552588 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:34.730166 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:35.031945 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:35.061605 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:35.234449 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:35.510502 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:35.559244 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:35.731057 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:36.010493 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:36.051808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:36.232380 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:36.509698 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:36.552621 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:36.745681 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:37.010808 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:37.052525 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:37.230332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:37.520327 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:37.551800 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:37.881777 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:38.009937 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:38.052520 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:38.230366 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:38.511231 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:38.551738 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:38.731132 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:39.010276 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:39.109985 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:39.231136 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:39.509972 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:39.552736 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:39.730944 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:40.011296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:40.052529 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:40.231123 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:40.511777 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:40.551973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:40.731032 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:41.010973 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:41.052526 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:41.231947 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:41.512073 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:41.552896 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:41.731178 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:42.010888 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:42.052437 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:42.231031 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:42.511849 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:42.552177 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:42.731349 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:43.010828 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:43.052516 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:43.230567 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:43.512691 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:43.552341 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:43.731593 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:44.249538 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:44.250135 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:44.250242 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:44.511995 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:44.553372 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:44.730853 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:45.011136 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:45.051417 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:45.230955 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:45.510980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:45.553271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:45.730966 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:46.011246 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:46.051775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:46.230803 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:46.510401 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:46.552603 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:46.731699 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:47.011501 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:47.052513 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:47.232159 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:47.511022 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:47.553343 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:47.732640 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:48.013715 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:48.053087 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:48.232696 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:48.511319 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:48.555075 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:48.732744 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:49.023367 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:49.057856 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:49.230358 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:49.512138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:49.552102 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:49.732022 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:50.011032 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:50.052346 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:50.232198 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:50.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:50.551759 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:50.730597 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:51.010485 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:51.052574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:51.231216 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:51.510010 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:51.552151 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:51.731248 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:52.009643 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:52.057120 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:52.231103 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:52.514636 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:52.553051 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:52.732413 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:53.010980 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:53.052832 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:53.642329 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:53.643036 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:53.650490 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:53.731286 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:54.013633 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:54.113275 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:54.231589 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:54.511224 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:54.552851 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:54.730371 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:55.010217 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:55.051884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:55.231352 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:55.517291 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:55.616101 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:55.731154 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:56.010679 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:56.051666 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:56.231039 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:56.512038 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:56.554197 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:56.734633 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:57.011271 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:57.052479 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:57.229871 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:57.511654 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:57.551574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:57.730415 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:58.010561 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:58.052189 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:58.231332 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:58.511608 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:58.554449 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:58.738948 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:59.011428 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:59.051900 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:59.240098 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:06:59.530091 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:06:59.559468 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:06:59.734879 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:00.010375 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:00.051846 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:00.231614 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:00.511807 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:00.552559 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:00.731087 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:01.010312 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:01.052009 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:01.230769 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:01.510328 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:01.552144 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:01.732106 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:02.010884 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:02.052084 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:02.230927 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:02.512458 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:02.552729 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:02.731487 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:03.010739 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:03.052270 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:03.230875 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:03.511574 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:03.553107 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:03.731603 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:04.193942 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:04.194389 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:04.231564 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:04.510775 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:04.551910 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:04.731010 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:05.010565 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:05.051766 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:05.231878 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:05.511402 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:05.552012 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:05.731266 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:06.010819 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:06.051758 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:06.230573 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:06.656273 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:06.657626 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:06.833411 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:07.010555 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:07.054659 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:07.239812 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:07.510886 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:07.551625 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:07.730512 2137369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0120 15:07:08.010801 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:08.052573 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:08.230496 2137369 kapi.go:107] duration metric: took 1m15.004285338s to wait for app.kubernetes.io/name=ingress-nginx ...
I0120 15:07:08.512826 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:08.552138 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:09.011987 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:09.051989 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:09.510790 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:09.552296 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:10.011148 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:10.052537 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:10.511355 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:10.551839 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:11.011519 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:11.110503 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0120 15:07:11.511730 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:11.611811 2137369 kapi.go:107] duration metric: took 1m14.063637565s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0120 15:07:11.613849 2137369 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-823768 cluster.
I0120 15:07:11.615491 2137369 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0120 15:07:11.616833 2137369 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0120 15:07:12.010475 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:12.511601 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:13.010504 2137369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0120 15:07:13.512426 2137369 kapi.go:107] duration metric: took 1m18.006867517s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0120 15:07:13.514225 2137369 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, ingress-dns, cloud-spanner, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I0120 15:07:13.515460 2137369 addons.go:514] duration metric: took 1m28.574436568s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner inspektor-gadget ingress-dns cloud-spanner metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I0120 15:07:13.515500 2137369 start.go:246] waiting for cluster config update ...
I0120 15:07:13.515518 2137369 start.go:255] writing updated cluster config ...
I0120 15:07:13.515785 2137369 ssh_runner.go:195] Run: rm -f paused
I0120 15:07:13.569861 2137369 start.go:600] kubectl: 1.32.1, cluster: 1.32.0 (minor skew: 0)
I0120 15:07:13.571716 2137369 out.go:177] * Done! kubectl is now configured to use "addons-823768" cluster and "default" namespace by default
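(Editor's note on the gcp-auth message above: the log says pods can opt out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest follows, assuming the addon looks for the label value "true"; the pod name and image here are hypothetical and do not come from this log.)

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"    # opt this pod out of GCP credential mounting (assumed value)
    spec:
      containers:
      - name: app
        image: busybox                  # hypothetical image
        command: ["sleep", "3600"]

Applying a manifest like this (e.g. `kubectl apply -f pod.yaml`) should, per the message above, produce a pod without the mounted GCP credentials, while other pods in the cluster keep them.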
==> CRI-O <==
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.021384300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824021352767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68471ec6-58c5-4ceb-a509-e61733e9b6a5 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022208901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022333801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.022809776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b515c3d3-8dae-4694-b066-fcc42cbc0df6 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.070361299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f89b4d3e-c2b4-498f-bd23-1e72bb1af901 name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.070473415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f89b4d3e-c2b4-498f-bd23-1e72bb1af901 name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.071884946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e49afd5-c3be-402c-8e8f-b899c4a78498 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.073524503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824073493807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e49afd5-c3be-402c-8e8f-b899c4a78498 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.074113674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.074188775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.075007873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a2455c-638b-44bc-b043-3c389b19cb91 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.115280940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e34c247e-8cec-4041-a527-61aa10dbc7b2 name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.115375109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e34c247e-8cec-4041-a527-61aa10dbc7b2 name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.116559888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c054c405-873e-42b2-82d5-89abb40f8421 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.117945629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824117912949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c054c405-873e-42b2-82d5-89abb40f8421 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.118900994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.118964950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.119547331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31501b0e-a60f-46f0-82a0-8d738c20a088 name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.157489158Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bda5514-7eb1-4980-96e1-79aced01d84e name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.157587785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bda5514-7eb1-4980-96e1-79aced01d84e name=/runtime.v1.RuntimeService/Version
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.159073364Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04316000-d561-4c0f-94e1-feaad129e982 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.160534626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385824160503840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04316000-d561-4c0f-94e1-feaad129e982 name=/runtime.v1.ImageService/ImageFsInfo
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161133560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161191632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
Jan 20 15:10:24 addons-823768 crio[664]: time="2025-01-20 15:10:24.161739766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0d939b4caf08eb54350b5aa23e89fc667bec3d6ad2e1cdf53ad059f20a45fcfa,PodSandboxId:3a519efe9b0389c6f8eab8ad880d51a93a046e8802df3afcdb8044fa4726d513,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3,State:CONTAINER_RUNNING,CreatedAt:1737385685922917165,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 66c3042c-5ca2-4e67-bbd5-02c9c84af6ea,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56341dccb27e24a4a5fc98e5f55f32e43b4612d8c50e4725891d1411f5b8f8e0,PodSandboxId:0cb85977d13d85fd6a9201f80992e86304ff371324a520e0f6964e407b9a26f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1737385635840126325,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 606ebe90-54f5-4442-a16c-ee4d7c99146e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739724295d0f28b5aba399118f926eba1fd21e87d8ad182fa2f4b987a5d1d769,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1737385632064789675,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4f42aa5415589c65e76a6c25c417473a659e2087b71b7188d9b5c8610924786,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1737385628632095336,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0564a081b1cc34a7b8cdc6412684e708f96dd1433148cf64bb8e4f1c1ecf5a0f,PodSandboxId:330b0828d8f12713c4adf1aa231d5655019f6893d855c80a706bcf4b0624c449,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1737385627013062212,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-g5ctf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0eb150dd-16b
4-418a-9533-ef0140d258d1,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a2e97ce48722b8576dddc211a69a1502d5ffbe952c0e7c8b1640fa134ba01138,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1737385619750759519,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:559997a706e3d21cc0f967c4e3c5b13fd7c8457c954ff770c450939594b31dc1,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14
c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1737385618462366745,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c46266f6f3f8ef9d11ba293271f8f0d629a4c0309d944b15f8f0173022057e8,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f6345198e404ae245d0,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attemp
t:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1737385616837910370,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebd723b834f324dc6f5d79d01008ca239ce662616b93fb36af3dfe42ea592637,PodSandboxId:36fc01dd96c91549174979997e6231c4b2239861774f2dcbf2f80f3f3f2099ae,Met
adata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1737385615269923592,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff9ae680-66e0-4d97-a31f-401bc2303326,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407fa55d66c41fac709ee91bebabef965f238ca3b9b15be99afda14b4fecaf15,PodSandboxId:178d147355f56c1d3a8d94715162d0ee2da23d3b793f2f
6345198e404ae245d0,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1737385613775578911,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-gnx78,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c749cfac-9a22-4577-9180-7c6720645ff1,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac2dabca6c916adc9e
dda32b0363ca47e902dd4b75f3229b8342a6778b1303a,PodSandboxId:5d17831bf379ab07c9a08a5b80a83738eff6ddaff3b1cfa7b94d170eb020a7ff,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1737385611708442291,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 116b9f15-1304-49fb-9076-931a2afbb254,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:44f6d3f5ab703369a329fc3d6bbb2ddaa949226698872beffb23c7017017b6c8,PodSandboxId:fdc33fccc554ab0db7d44daaa5c8a4259323a1ab26e0ae25a537254a1859e03f,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610982473464,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xh2h7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c7031b3e-6221-45ae-a4a8-b6ce4e152d6c,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2fa6de7c4e6267
1eed38d0a63f1dcacd3f7627ac46c4790794375099519f16,PodSandboxId:94a7aef39e7030754c1de18634ec36c8c03b19467b4b700a8cc9451af46ece1b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1737385610147618531,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6vqcs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ead14696-5c7b-44d7-8555-1fb2df92f8bb,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:c5d3228e30e2faa76b93fb5a26c445e0079fd71b1d87d9fb6378ba129240201f,PodSandboxId:f8f344c34f1e024c68fd6bb08d8af02d87a892a9514c56b5eba663758ed4de08,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385607714889079,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-wz6d5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cacd7ffe-a681-4acf-96f8-18ef261221a0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:36030becf6c98f72f5864b3e74473ee86f0e2eb64d69c8575baa7b897406fa39,PodSandboxId:aef77a63ec4ad09e8ba027cac4fa617d462a2525b0845c7772e01b9f5ef8b326,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1737385593848855247,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-68b874b76f-v9qfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f5c996f-6eab-461e-ab1b-cd3349dd28b6,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0ab0f9b0c6189698167001b6d245418b6802b84e5afe9f7c0e74dcbda65715f,PodSandboxId:718d4440e9db0be4cfbc40ba14155123e2844de74b15896f6bb022efc435031a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1737385582447810606,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c004e6ed-e3c7-41fb-81db-143b10c8e7be,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes
.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97181205fe8bfde879ac7f4a51d4017aa7e53d2f40e7b72512e7e9aa2e3a1e73,PodSandboxId:ee23f086b04ce2e73d1bb935d3be0450f91dad52c7c41e3c2b06b09c6c6eda1f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1737385571454606615,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-hd9wh,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 74d848dc-f26d-43fe-8a5a-a0df1659422e,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6,PodSandboxId:fca109f861e411be5babd275ed95b352404e992b5dab1a90ffe48a6b88e0a2e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1737385553514978161,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0e778f21-8d84-4dd3-a4d5-1d838a0c732a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71,PodSandboxId:136274a1e784b3609fa9ac2f30c16c605d79aaaec351df86a1af6d39917410fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1737385548832822328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-5vcsv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07cf3526-d1a7-45e9-a4b0-843c4c5
d8087,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66,PodSandboxId:3eb31a3186fcb8345cd502441425a9d283a8a1d871e01a73b1c6ab5ec24fcb1f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1737385546514112240,Labels:ma
p[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rvmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad2f5c6d-b93f-4390-876b-33132993d790,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115,PodSandboxId:e3f2609c351e31ebb917c217ea604edfd1ed1153edd840da63d83816e8131c06,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1737385534874575585,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa89e65c0dd5eb66f20c370d80247ae1,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6,PodSandboxId:717d4b555e17cb12eabcd9fd5f367ce049f51431cf06e13506fa2f01920eed0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1737385534860117722,Labels:map[string]string{io.kubernetes.contain
er.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2eb10cf3914b699799b36cacd58b0d,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9,PodSandboxId:11457cc606696b4f6ca88c342b55bfa4fa55e299628c619bcfe5354d81f4da77,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1737385534904766615,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275be48abf35fb88c2ee76ac3fc80e7b,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061,PodSandboxId:04698fdd92bec31438bf337e8da0528ab4059b78c66cb3d0c7b96990e8fe8c0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1737385534885758903,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name
: kube-apiserver-addons-823768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8efe2c5495b6ef47020c6e3bc5a82719,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9a4792b-3edc-47b5-8658-7375aaf6428f name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
0d939b4caf08e docker.io/library/nginx@sha256:679a5fd058f6ca754a561846fe27927e408074431d63556e8fc588fc38be6901 2 minutes ago Running nginx 0 3a519efe9b038 nginx
56341dccb27e2 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 0cb85977d13d8 busybox
739724295d0f2 registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f 3 minutes ago Running csi-snapshotter 0 178d147355f56 csi-hostpathplugin-gnx78
b4f42aa541558 registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7 3 minutes ago Running csi-provisioner 0 178d147355f56 csi-hostpathplugin-gnx78
0564a081b1cc3 registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 330b0828d8f12 ingress-nginx-controller-56d7c84fd4-g5ctf
a2e97ce48722b registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6 3 minutes ago Running liveness-probe 0 178d147355f56 csi-hostpathplugin-gnx78
559997a706e3d registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11 3 minutes ago Running hostpath 0 178d147355f56 csi-hostpathplugin-gnx78
4c46266f6f3f8 registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc 3 minutes ago Running node-driver-registrar 0 178d147355f56 csi-hostpathplugin-gnx78
ebd723b834f32 registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8 3 minutes ago Running csi-resizer 0 36fc01dd96c91 csi-hostpath-resizer-0
407fa55d66c41 registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864 3 minutes ago Running csi-external-health-monitor-controller 0 178d147355f56 csi-hostpathplugin-gnx78
4ac2dabca6c91 registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0 3 minutes ago Running csi-attacher 0 5d17831bf379a csi-hostpath-attacher-0
44f6d3f5ab703 a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb 3 minutes ago Exited patch 1 fdc33fccc554a ingress-nginx-admission-patch-xh2h7
2b2fa6de7c4e6 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 94a7aef39e703 ingress-nginx-admission-create-6vqcs
c5d3228e30e2f registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 3 minutes ago Running volume-snapshot-controller 0 f8f344c34f1e0 snapshot-controller-68b874b76f-wz6d5
36030becf6c98 registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922 3 minutes ago Running volume-snapshot-controller 0 aef77a63ec4ad snapshot-controller-68b874b76f-v9qfd
f0ab0f9b0c618 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 4 minutes ago Running minikube-ingress-dns 0 718d4440e9db0 kube-ingress-dns-minikube
97181205fe8bf docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 ee23f086b04ce amd-gpu-device-plugin-hd9wh
6eeabacb6e6ea 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 fca109f861e41 storage-provisioner
3ad760d35635f c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 136274a1e784b coredns-668d6bf9bc-5vcsv
a16679188eadc 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08 4 minutes ago Running kube-proxy 0 3eb31a3186fcb kube-proxy-7rvmm
3e011bb870926 a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc 4 minutes ago Running etcd 0 11457cc606696 etcd-addons-823768
2e3f3a7d8000f c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4 4 minutes ago Running kube-apiserver 0 04698fdd92bec kube-apiserver-addons-823768
2e3453aa93d27 a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5 4 minutes ago Running kube-scheduler 0 e3f2609c351e3 kube-scheduler-addons-823768
910f65c08fb23 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3 4 minutes ago Running kube-controller-manager 0 717d4b555e17c kube-controller-manager-addons-823768
==> coredns [3ad760d35635f6485572a0b6fa40042d426af15a7e58fe0ef324a32a6f6b2d71] <==
[INFO] 10.244.0.8:56019 - 8659 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000096646s
[INFO] 10.244.0.8:56019 - 15449 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000112548s
[INFO] 10.244.0.8:56019 - 48990 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079748s
[INFO] 10.244.0.8:56019 - 33141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000105011s
[INFO] 10.244.0.8:56019 - 62395 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000134282s
[INFO] 10.244.0.8:56019 - 6006 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000106164s
[INFO] 10.244.0.8:56019 - 8304 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000071529s
[INFO] 10.244.0.8:34548 - 65505 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000198464s
[INFO] 10.244.0.8:34548 - 65209 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084361s
[INFO] 10.244.0.8:45732 - 48780 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095991s
[INFO] 10.244.0.8:45732 - 48577 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000075237s
[INFO] 10.244.0.8:48111 - 44007 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094432s
[INFO] 10.244.0.8:48111 - 44175 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000051812s
[INFO] 10.244.0.8:54661 - 45113 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011191s
[INFO] 10.244.0.8:54661 - 44955 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125648s
[INFO] 10.244.0.23:32815 - 41479 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317766s
[INFO] 10.244.0.23:55241 - 46997 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000113506s
[INFO] 10.244.0.23:32971 - 50582 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000126524s
[INFO] 10.244.0.23:56239 - 4615 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000106422s
[INFO] 10.244.0.23:46110 - 53295 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146317s
[INFO] 10.244.0.23:57583 - 28036 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092403s
[INFO] 10.244.0.23:41341 - 34430 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001860332s
[INFO] 10.244.0.23:46756 - 21526 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0017886s
[INFO] 10.244.0.28:40171 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000434828s
[INFO] 10.244.0.28:36379 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000227115s
==> describe nodes <==
Name: addons-823768
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-823768
kubernetes.io/os=linux
minikube.k8s.io/commit=5361cb60dc81b84464882b386f50211c10a5a7cc
minikube.k8s.io/name=addons-823768
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_01_20T15_05_40_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-823768
Annotations: csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-823768"}
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 20 Jan 2025 15:05:37 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-823768
AcquireTime: <unset>
RenewTime: Mon, 20 Jan 2025 15:10:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 20 Jan 2025 15:08:13 +0000 Mon, 20 Jan 2025 15:05:35 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 20 Jan 2025 15:08:13 +0000 Mon, 20 Jan 2025 15:05:35 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 20 Jan 2025 15:08:13 +0000 Mon, 20 Jan 2025 15:05:35 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 20 Jan 2025 15:08:13 +0000 Mon, 20 Jan 2025 15:05:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.158
Hostname: addons-823768
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 2ed69cfbae1c49d5a2adeea9f9d7ada9
System UUID: 2ed69cfb-ae1c-49d5-a2ad-eea9f9d7ada9
Boot ID: 5745ae5a-4581-4558-8316-987961d0b42c
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.32.0
Kube-Proxy Version: v1.32.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (19 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m10s
default hello-world-app-7d9564db4-njdj6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m22s
default task-pv-pod 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m20s
ingress-nginx ingress-nginx-controller-56d7c84fd4-g5ctf 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m32s
kube-system amd-gpu-device-plugin-hd9wh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m37s
kube-system coredns-668d6bf9bc-5vcsv 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m40s
kube-system csi-hostpath-attacher-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system csi-hostpath-resizer-0 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system csi-hostpathplugin-gnx78 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m29s
kube-system etcd-addons-823768 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m44s
kube-system kube-apiserver-addons-823768 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m44s
kube-system kube-controller-manager-addons-823768 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m46s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m35s
kube-system kube-proxy-7rvmm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m41s
kube-system kube-scheduler-addons-823768 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m45s
kube-system snapshot-controller-68b874b76f-v9qfd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system snapshot-controller-68b874b76f-wz6d5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m33s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m36s kube-proxy
Normal Starting 4m45s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m44s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m44s kubelet Node addons-823768 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m44s kubelet Node addons-823768 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m44s kubelet Node addons-823768 status is now: NodeHasSufficientPID
Normal NodeReady 4m44s kubelet Node addons-823768 status is now: NodeReady
Normal RegisteredNode 4m41s node-controller Node addons-823768 event: Registered Node addons-823768 in Controller
==> dmesg <==
[ +4.423973] systemd-fstab-generator[873]: Ignoring "noauto" option for root device
[ +0.581285] kauditd_printk_skb: 46 callbacks suppressed
[ +5.484443] systemd-fstab-generator[1224]: Ignoring "noauto" option for root device
[ +0.073928] kauditd_printk_skb: 41 callbacks suppressed
[ +5.273691] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
[ +0.162747] kauditd_printk_skb: 21 callbacks suppressed
[ +5.214655] kauditd_printk_skb: 108 callbacks suppressed
[ +5.210113] kauditd_printk_skb: 116 callbacks suppressed
[Jan20 15:06] kauditd_printk_skb: 110 callbacks suppressed
[ +19.139201] kauditd_printk_skb: 2 callbacks suppressed
[ +5.836806] kauditd_printk_skb: 7 callbacks suppressed
[ +13.754303] kauditd_printk_skb: 4 callbacks suppressed
[ +5.384646] kauditd_printk_skb: 56 callbacks suppressed
[ +5.188215] kauditd_printk_skb: 39 callbacks suppressed
[Jan20 15:07] kauditd_printk_skb: 30 callbacks suppressed
[ +5.189656] kauditd_printk_skb: 4 callbacks suppressed
[ +5.820035] kauditd_printk_skb: 7 callbacks suppressed
[ +5.687193] kauditd_printk_skb: 7 callbacks suppressed
[ +11.564951] kauditd_printk_skb: 2 callbacks suppressed
[ +6.553515] kauditd_printk_skb: 31 callbacks suppressed
[ +5.060585] kauditd_printk_skb: 51 callbacks suppressed
[ +7.298488] kauditd_printk_skb: 62 callbacks suppressed
[ +6.013101] kauditd_printk_skb: 4 callbacks suppressed
[Jan20 15:08] kauditd_printk_skb: 15 callbacks suppressed
[ +11.389449] kauditd_printk_skb: 25 callbacks suppressed
==> etcd [3e011bb870926a0e81acf676ccea2f9b4bb99849944df7c7876e408162ada7e9] <==
{"level":"warn","ts":"2025-01-20T15:06:53.622686Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:06:53.214746Z","time spent":"407.928378ms","remote":"127.0.0.1:33576","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2025-01-20T15:06:53.622815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.236562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:06:53.622921Z","caller":"traceutil/trace.go:171","msg":"trace[1179903400] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1026; }","duration":"128.363318ms","start":"2025-01-20T15:06:53.494550Z","end":"2025-01-20T15:06:53.622913Z","steps":["trace[1179903400] 'agreement among raft nodes before linearized reading' (duration: 128.237512ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T15:07:04.175891Z","caller":"traceutil/trace.go:171","msg":"trace[1687271508] linearizableReadLoop","detail":"{readStateIndex:1128; appliedIndex:1127; }","duration":"182.064453ms","start":"2025-01-20T15:07:03.993807Z","end":"2025-01-20T15:07:04.175872Z","steps":["trace[1687271508] 'read index received' (duration: 177.609883ms)","trace[1687271508] 'applied index is now lower than readState.Index' (duration: 4.453716ms)"],"step_count":2}
{"level":"warn","ts":"2025-01-20T15:07:04.176082Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"182.222905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:04.176101Z","caller":"traceutil/trace.go:171","msg":"trace[892120133] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"182.312552ms","start":"2025-01-20T15:07:03.993783Z","end":"2025-01-20T15:07:04.176096Z","steps":["trace[892120133] 'agreement among raft nodes before linearized reading' (duration: 182.222961ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:04.176369Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.332376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:04.176412Z","caller":"traceutil/trace.go:171","msg":"trace[2055759117] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"140.400644ms","start":"2025-01-20T15:07:04.036004Z","end":"2025-01-20T15:07:04.176405Z","steps":["trace[2055759117] 'agreement among raft nodes before linearized reading' (duration: 140.338392ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T15:07:06.637079Z","caller":"traceutil/trace.go:171","msg":"trace[138422048] linearizableReadLoop","detail":"{readStateIndex:1135; appliedIndex:1134; }","duration":"144.033657ms","start":"2025-01-20T15:07:06.493032Z","end":"2025-01-20T15:07:06.637065Z","steps":["trace[138422048] 'read index received' (duration: 143.913692ms)","trace[138422048] 'applied index is now lower than readState.Index' (duration: 119.506µs)"],"step_count":2}
{"level":"info","ts":"2025-01-20T15:07:06.637381Z","caller":"traceutil/trace.go:171","msg":"trace[1309473886] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"261.49564ms","start":"2025-01-20T15:07:06.375877Z","end":"2025-01-20T15:07:06.637373Z","steps":["trace[1309473886] 'process raft request' (duration: 261.110224ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:06.637533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.488772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:06.637569Z","caller":"traceutil/trace.go:171","msg":"trace[673675145] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"144.554654ms","start":"2025-01-20T15:07:06.493009Z","end":"2025-01-20T15:07:06.637563Z","steps":["trace[673675145] 'agreement among raft nodes before linearized reading' (duration: 144.494071ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:06.637663Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.810086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:06.637693Z","caller":"traceutil/trace.go:171","msg":"trace[790918003] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1102; }","duration":"142.850687ms","start":"2025-01-20T15:07:06.494838Z","end":"2025-01-20T15:07:06.637689Z","steps":["trace[790918003] 'agreement among raft nodes before linearized reading' (duration: 142.808266ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:06.639008Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.824656ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:06.639058Z","caller":"traceutil/trace.go:171","msg":"trace[1373366693] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.929966ms","start":"2025-01-20T15:07:06.536120Z","end":"2025-01-20T15:07:06.639050Z","steps":["trace[1373366693] 'agreement among raft nodes before linearized reading' (duration: 102.854164ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:06.816709Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.709345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:06.816810Z","caller":"traceutil/trace.go:171","msg":"trace[710433880] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1102; }","duration":"102.836863ms","start":"2025-01-20T15:07:06.713959Z","end":"2025-01-20T15:07:06.816796Z","steps":["trace[710433880] 'range keys from in-memory index tree' (duration: 102.636914ms)"],"step_count":1}
{"level":"info","ts":"2025-01-20T15:07:38.235473Z","caller":"traceutil/trace.go:171","msg":"trace[541629752] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1316; }","duration":"424.155374ms","start":"2025-01-20T15:07:37.811290Z","end":"2025-01-20T15:07:38.235445Z","steps":["trace[541629752] 'process raft request' (duration: 424.050614ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:38.235864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-01-20T15:07:37.811275Z","time spent":"424.383764ms","remote":"127.0.0.1:33794","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":67,"response count":0,"response size":41,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" mod_revision:865 > success:<request_delete_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > > failure:<request_range:<key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" > >"}
{"level":"info","ts":"2025-01-20T15:07:38.236302Z","caller":"traceutil/trace.go:171","msg":"trace[1154709480] linearizableReadLoop","detail":"{readStateIndex:1357; appliedIndex:1357; }","duration":"294.273699ms","start":"2025-01-20T15:07:37.942019Z","end":"2025-01-20T15:07:38.236292Z","steps":["trace[1154709480] 'read index received' (duration: 294.270262ms)","trace[1154709480] 'applied index is now lower than readState.Index' (duration: 2.637µs)"],"step_count":2}
{"level":"warn","ts":"2025-01-20T15:07:38.237026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"294.993346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:38.237394Z","caller":"traceutil/trace.go:171","msg":"trace[1054917839] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1316; }","duration":"295.389592ms","start":"2025-01-20T15:07:37.941996Z","end":"2025-01-20T15:07:38.237385Z","steps":["trace[1054917839] 'agreement among raft nodes before linearized reading' (duration: 294.979507ms)"],"step_count":1}
{"level":"warn","ts":"2025-01-20T15:07:38.237161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.437044ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-01-20T15:07:38.237673Z","caller":"traceutil/trace.go:171","msg":"trace[2072934601] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1316; }","duration":"232.963958ms","start":"2025-01-20T15:07:38.004697Z","end":"2025-01-20T15:07:38.237661Z","steps":["trace[2072934601] 'agreement among raft nodes before linearized reading' (duration: 232.442407ms)"],"step_count":1}
==> kernel <==
15:10:24 up 5 min, 0 users, load average: 1.07, 1.63, 0.85
Linux addons-823768 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [2e3f3a7d8000f3b8c196c9737b85b63fd0ded933be19720accb71ecafa96a061] <==
W0120 15:06:37.744311 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 15:06:37.744398 1 controller.go:102] "Unhandled Error" err=<
loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 15:06:37.745499 1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0120 15:06:37.745571 1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
E0120 15:06:41.753217 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.184.109:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
W0120 15:06:41.753511 1 handler_proxy.go:99] no RequestInfo found in the context
E0120 15:06:41.753603 1 controller.go:146] "Unhandled Error" err=<
Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
I0120 15:06:41.754487 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0120 15:06:41.791340 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
E0120 15:07:22.363530 1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54916: use of closed network connection
E0120 15:07:22.559816 1 conn.go:339] Error on socket receive: read tcp 192.168.39.158:8443->192.168.39.1:54946: use of closed network connection
I0120 15:07:32.120012 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.207.216"}
I0120 15:07:57.385722 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0120 15:07:58.513579 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
E0120 15:07:58.845666 1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I0120 15:08:02.223494 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0120 15:08:02.404784 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.212.216"}
I0120 15:08:42.776322 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0120 15:10:22.830795 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.53.157"}
==> kube-controller-manager [910f65c08fb23854cbdcc82ddb7f710ce49ea972c848f0c95dc0dd199743d1d6] <==
E0120 15:08:26.413138 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="daemonsets.apps is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"daemonsets\" in API group \"apps\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="apps/v1, Resource=daemonsets"
E0120 15:08:26.417807 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="replicationcontrollers is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"replicationcontrollers\" in API group \"\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="/v1, Resource=replicationcontrollers"
E0120 15:08:26.421903 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="secrets is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"secrets\" in API group \"\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="/v1, Resource=secrets"
E0120 15:08:26.426042 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="replicasets.apps is forbidden: User \"system:serviceaccount:kube-system:namespace-controller\" cannot watch resource \"replicasets\" in API group \"apps\" in the namespace \"local-path-storage\"" logger="namespace-controller" resource="apps/v1, Resource=replicasets"
I0120 15:08:31.434321 1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
W0120 15:08:32.388018 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0120 15:08:32.389024 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0120 15:08:32.390074 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0120 15:08:32.390107 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0120 15:08:58.482023 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0120 15:08:58.482933 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0120 15:08:58.483988 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0120 15:08:58.484035 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0120 15:09:28.638875 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0120 15:09:28.639966 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0120 15:09:28.640878 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0120 15:09:28.640978 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0120 15:10:20.359661 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0120 15:10:20.360969 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0120 15:10:20.361955 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0120 15:10:20.362017 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0120 15:10:22.658421 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="44.48692ms"
I0120 15:10:22.682540 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.006785ms"
I0120 15:10:22.682624 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="34.34µs"
I0120 15:10:22.707043 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="39.409µs"
==> kube-proxy [a16679188eadce24a4479b62adc59f3b5c88a37585c863efbe419451befd1b66] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0120 15:05:47.687355 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0120 15:05:47.705944 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.158"]
E0120 15:05:47.706029 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0120 15:05:47.853450 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0120 15:05:47.853525 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0120 15:05:47.855376 1 server_linux.go:170] "Using iptables Proxier"
I0120 15:05:47.891958 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0120 15:05:47.892217 1 server.go:497] "Version info" version="v1.32.0"
I0120 15:05:47.892280 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0120 15:05:47.910195 1 config.go:199] "Starting service config controller"
I0120 15:05:47.910308 1 shared_informer.go:313] Waiting for caches to sync for service config
I0120 15:05:47.910348 1 config.go:105] "Starting endpoint slice config controller"
I0120 15:05:47.910353 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0120 15:05:47.916092 1 config.go:329] "Starting node config controller"
I0120 15:05:47.916125 1 shared_informer.go:313] Waiting for caches to sync for node config
I0120 15:05:48.012507 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0120 15:05:48.012551 1 shared_informer.go:320] Caches are synced for service config
I0120 15:05:48.021389 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [2e3453aa93d27fd339c4cbb350ff3ea39c5648c43373ff8d85ab0e791b5d5115] <==
E0120 15:05:37.227928 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0120 15:05:37.226486 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 15:05:37.227950 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0120 15:05:37.225889 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0120 15:05:37.228648 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 15:05:37.228765 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.059356 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0120 15:05:38.059463 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.140355 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0120 15:05:38.140406 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.152627 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0120 15:05:38.152684 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0120 15:05:38.196083 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0120 15:05:38.196182 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.358544 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0120 15:05:38.358698 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.425806 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0120 15:05:38.425899 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.480160 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0120 15:05:38.480210 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.527483 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0120 15:05:38.527585 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0120 15:05:38.532584 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0120 15:05:38.532947 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
I0120 15:05:40.619676 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 20 15:09:30 addons-823768 kubelet[1231]: E0120 15:09:30.247076 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385770246571745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:09:34 addons-823768 kubelet[1231]: E0120 15:09:34.960929 1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
Jan 20 15:09:39 addons-823768 kubelet[1231]: E0120 15:09:39.983153 1231 iptables.go:577] "Could not set up iptables canary" err=<
Jan 20 15:09:39 addons-823768 kubelet[1231]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Jan 20 15:09:39 addons-823768 kubelet[1231]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Jan 20 15:09:39 addons-823768 kubelet[1231]: Perhaps ip6tables or your kernel needs to be upgraded.
Jan 20 15:09:39 addons-823768 kubelet[1231]: > table="nat" chain="KUBE-KUBELET-CANARY"
Jan 20 15:09:40 addons-823768 kubelet[1231]: E0120 15:09:40.250459 1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385780249884823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:09:40 addons-823768 kubelet[1231]: E0120 15:09:40.250632 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385780249884823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:09:50 addons-823768 kubelet[1231]: E0120 15:09:50.253582 1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385790253140080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:09:50 addons-823768 kubelet[1231]: E0120 15:09:50.253613 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385790253140080,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:00 addons-823768 kubelet[1231]: E0120 15:10:00.258653 1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385800257701607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:00 addons-823768 kubelet[1231]: E0120 15:10:00.258759 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385800257701607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:04 addons-823768 kubelet[1231]: I0120 15:10:04.960885 1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Jan 20 15:10:09 addons-823768 kubelet[1231]: I0120 15:10:09.961174 1231 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hd9wh" secret="" err="secret \"gcp-auth\" not found"
Jan 20 15:10:10 addons-823768 kubelet[1231]: E0120 15:10:10.262958 1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385810262380730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:10 addons-823768 kubelet[1231]: E0120 15:10:10.263068 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385810262380730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.601818 1231 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.602145 1231 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.602423 1231 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gr84p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
Jan 20 15:10:18 addons-823768 kubelet[1231]: E0120 15:10:18.603674 1231 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="d9fc6739-b7c7-4b24-a4d5-a049a13f7d8a"
Jan 20 15:10:20 addons-823768 kubelet[1231]: E0120 15:10:20.265608 1231 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385820265041453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:20 addons-823768 kubelet[1231]: E0120 15:10:20.265653 1231 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1737385820265041453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:566250,},InodesUsed:&UInt64Value{Value:196,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Jan 20 15:10:22 addons-823768 kubelet[1231]: I0120 15:10:22.668184 1231 memory_manager.go:355] "RemoveStaleState removing state" podUID="3d8d1e3d-79bc-45d9-ab92-9203d7b75946" containerName="local-path-provisioner"
Jan 20 15:10:22 addons-823768 kubelet[1231]: I0120 15:10:22.754051 1231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbczl\" (UniqueName: \"kubernetes.io/projected/11eae7f9-7cd6-44da-b989-0b800a978cc2-kube-api-access-pbczl\") pod \"hello-world-app-7d9564db4-njdj6\" (UID: \"11eae7f9-7cd6-44da-b989-0b800a978cc2\") " pod="default/hello-world-app-7d9564db4-njdj6"
==> storage-provisioner [6eeabacb6e6ea1dc7ad4d246f72a67dffb217dd988427a4567055a6557c856b6] <==
I0120 15:05:54.184778 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0120 15:05:54.216206 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0120 15:05:54.216325 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0120 15:05:54.246555 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0120 15:05:54.246676 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
I0120 15:05:54.250017 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7873a08-e047-4b60-90dd-2fa00f314b75", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7 became leader
I0120 15:05:54.347624 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-823768_e27a1a2c-1ff1-4646-9ec6-62e7ff9ab0b7!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-823768 -n addons-823768
helpers_test.go:261: (dbg) Run: kubectl --context addons-823768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7: exit status 1 (75.310693ms)
-- stdout --
Name: hello-world-app-7d9564db4-njdj6
Namespace: default
Priority: 0
Service Account: default
Node: addons-823768/192.168.39.158
Start Time: Mon, 20 Jan 2025 15:10:22 +0000
Labels: app=hello-world-app
pod-template-hash=7d9564db4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-7d9564db4
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbczl (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-pbczl:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3s default-scheduler Successfully assigned default/hello-world-app-7d9564db4-njdj6 to addons-823768
Normal Pulling 2s kubelet Pulling image "docker.io/kicbase/echo-server:1.0"
Name: task-pv-pod
Namespace: default
Priority: 0
Service Account: default
Node: addons-823768/192.168.39.158
Start Time: Mon, 20 Jan 2025 15:08:04 +0000
Labels: app=task-pv-pod
Annotations: <none>
Status: Pending
IP: 10.244.0.31
IPs:
IP: 10.244.0.31
Containers:
task-pv-container:
Container ID:
Image: docker.io/nginx
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gr84p (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
task-pv-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hpvc
ReadOnly: false
kube-api-access-gr84p:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m21s default-scheduler Successfully assigned default/task-pv-pod to addons-823768
Warning Failed 108s kubelet Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal BackOff 51s (x2 over 108s) kubelet Back-off pulling image "docker.io/nginx"
Warning Failed 51s (x2 over 108s) kubelet Error: ImagePullBackOff
Normal Pulling 38s (x3 over 2m19s) kubelet Pulling image "docker.io/nginx"
Warning Failed 7s (x3 over 108s) kubelet Error: ErrImagePull
Warning Failed 7s (x2 over 66s) kubelet Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-6vqcs" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-xh2h7" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-823768 describe pod hello-world-app-7d9564db4-njdj6 task-pv-pod ingress-nginx-admission-create-6vqcs ingress-nginx-admission-patch-xh2h7: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-823768 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable ingress-dns --alsologtostderr -v=1: (1.511424985s)
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-823768 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-823768 addons disable ingress --alsologtostderr -v=1: (7.797746092s)
--- FAIL: TestAddons/parallel/Ingress (152.82s)