=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-411768 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-411768 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-411768 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Pending
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c6fdc475-449b-4a8c-a72c-3d42ef531b1c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00420126s
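Note: the wait above (addons_test.go:250) polls pods labeled run=nginx in the default namespace until one reports Running and Ready, with an 8m0s budget; here it took about 10s. A minimal sketch of an equivalent poll using client-go, assuming the current kubeconfig already selects the addons-411768 context (illustrative only, not the helpers_test.go implementation):

// poll_nginx_ready.go: illustrative sketch, not the helpers_test.go code.
// Polls for a Running+Ready pod labeled run=nginx in the "default" namespace.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True, which is what the
// "Running / Ready" transitions in the log above correspond to.
func isReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: ~/.kube/config already points at the addons-411768 context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(8 * time.Minute) // mirrors the test's 8m0s budget
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx"})
		if err == nil {
			for i := range pods.Items {
				p := &pods.Items[i]
				if p.Status.Phase == corev1.PodRunning && isReady(p) {
					fmt.Println("pod ready:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for a run=nginx pod")
}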
I0414 16:34:17.622860 156633 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-411768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-411768 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.218295237s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
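Note: curl's exit status 28 means the request timed out, i.e. the ingress controller never answered on 127.0.0.1:80 inside the VM within the roughly 2m9s the test allowed, so addons_test.go:278 records the failure. A minimal sketch of re-running the same probe from the host via the minikube binary, assuming the same profile name and a hypothetical 2-minute budget (illustrative only, not the addons_test.go helper):

// repro_ingress_probe.go: illustrative sketch, not the addons_test.go helper.
// Re-runs the probe from the log: curl the ingress from inside the VM with the
// Host header nginx.example.com, via `minikube ssh`.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Assumption: a 2-minute budget, mirroring the ~2m9s the test waited.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-411768",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A curl timeout (exit status 28) inside the VM surfaces here as a
		// non-zero exit from the ssh command, just as in the log above.
		fmt.Printf("probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("ingress answered:\n%s", out)
}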
addons_test.go:286: (dbg) Run: kubectl --context addons-411768 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-411768 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.237
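Note: the nslookup step queries the node IP reported by `minikube ip` above (192.168.39.237) as a DNS server, which is how the ingress-dns addon is verified. A minimal sketch of the same lookup done programmatically, assuming the node IP from this run and that ingress-dns is listening on port 53 (illustrative only, not part of the test suite):

// ingress_dns_lookup.go: illustrative sketch, not part of the test suite.
// Performs the same query as `nslookup hello-john.test 192.168.39.237` by
// pointing Go's resolver at the minikube node IP, where ingress-dns listens.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Ignore the system resolver address; ask the node directly on port 53.
			return d.DialContext(ctx, network, "192.168.39.237:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("ingress-dns lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}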
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-411768 -n addons-411768
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-411768 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 logs -n 25: (1.399880948s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| delete | -p download-only-356094 | download-only-356094 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
| delete | -p download-only-383049 | download-only-383049 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
| delete | -p download-only-356094 | download-only-356094 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
| start | --download-only -p | binary-mirror-396839 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | |
| | binary-mirror-396839 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:35155 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-396839 | binary-mirror-396839 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:31 UTC |
| addons | enable dashboard -p | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | |
| | addons-411768 | | | | | |
| addons | disable dashboard -p | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | |
| | addons-411768 | | | | | |
| start | -p addons-411768 --wait=true | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:31 UTC | 14 Apr 25 16:33 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
| | -p addons-411768 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:33 UTC | 14 Apr 25 16:33 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-411768 ip | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ssh | addons-411768 ssh curl -s | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ssh | addons-411768 ssh cat | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | /opt/local-path-provisioner/pvc-0223bcea-7c20-4f57-890f-2ceeb26fd209_default_test-pvc/file1 | | | | | |
| addons | addons-411768 addons disable | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-411768 addons | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:34 UTC | 14 Apr 25 16:34 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-411768 ip | addons-411768 | jenkins | v1.35.0 | 14 Apr 25 16:36 UTC | 14 Apr 25 16:36 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/14 16:31:18
Running on machine: ubuntu-20-agent-14
Binary: Built with gc go1.24.0 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0414 16:31:18.165165 157245 out.go:345] Setting OutFile to fd 1 ...
I0414 16:31:18.165434 157245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:31:18.165444 157245 out.go:358] Setting ErrFile to fd 2...
I0414 16:31:18.165448 157245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 16:31:18.165601 157245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20349-149500/.minikube/bin
I0414 16:31:18.166170 157245 out.go:352] Setting JSON to false
I0414 16:31:18.167043 157245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4376,"bootTime":1744643902,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1078-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0414 16:31:18.167148 157245 start.go:139] virtualization: kvm guest
I0414 16:31:18.168712 157245 out.go:177] * [addons-411768] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
I0414 16:31:18.169729 157245 out.go:177] - MINIKUBE_LOCATION=20349
I0414 16:31:18.169739 157245 notify.go:220] Checking for updates...
I0414 16:31:18.171747 157245 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 16:31:18.172941 157245 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20349-149500/kubeconfig
I0414 16:31:18.173968 157245 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20349-149500/.minikube
I0414 16:31:18.175143 157245 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0414 16:31:18.176137 157245 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 16:31:18.177181 157245 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 16:31:18.207857 157245 out.go:177] * Using the kvm2 driver based on user configuration
I0414 16:31:18.208872 157245 start.go:297] selected driver: kvm2
I0414 16:31:18.208881 157245 start.go:901] validating driver "kvm2" against <nil>
I0414 16:31:18.208891 157245 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 16:31:18.209581 157245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 16:31:18.209649 157245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20349-149500/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0414 16:31:18.224501 157245 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
I0414 16:31:18.224537 157245 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0414 16:31:18.224794 157245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 16:31:18.224834 157245 cni.go:84] Creating CNI manager for ""
I0414 16:31:18.224880 157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0414 16:31:18.224893 157245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0414 16:31:18.224954 157245 start.go:340] cluster config:
{Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 16:31:18.225100 157245 iso.go:125] acquiring lock: {Name:mk56ab209abfa01de10f2f82564ecd03de00499a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 16:31:18.226589 157245 out.go:177] * Starting "addons-411768" primary control-plane node in "addons-411768" cluster
I0414 16:31:18.227628 157245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 16:31:18.227659 157245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4
I0414 16:31:18.227671 157245 cache.go:56] Caching tarball of preloaded images
I0414 16:31:18.227745 157245 preload.go:172] Found /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0414 16:31:18.227756 157245 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on crio
I0414 16:31:18.228095 157245 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json ...
I0414 16:31:18.228123 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json: {Name:mkcb6309a4986d6ced4a41482792d346f9017346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:18.228247 157245 start.go:360] acquireMachinesLock for addons-411768: {Name:mk6f64d523f60ec1e047c10a4c586315976dcd43 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0414 16:31:18.228309 157245 start.go:364] duration metric: took 47.488µs to acquireMachinesLock for "addons-411768"
I0414 16:31:18.228330 157245 start.go:93] Provisioning new machine with config: &{Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterNa
me:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0414 16:31:18.228394 157245 start.go:125] createHost starting for "" (driver="kvm2")
I0414 16:31:18.229747 157245 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0414 16:31:18.229893 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:31:18.229939 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:31:18.243165 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
I0414 16:31:18.243642 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:31:18.244199 157245 main.go:141] libmachine: Using API Version 1
I0414 16:31:18.244218 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:31:18.244552 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:31:18.244706 157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
I0414 16:31:18.244843 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:18.244986 157245 start.go:159] libmachine.API.Create for "addons-411768" (driver="kvm2")
I0414 16:31:18.245016 157245 client.go:168] LocalClient.Create starting
I0414 16:31:18.245052 157245 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem
I0414 16:31:18.541759 157245 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem
I0414 16:31:18.635842 157245 main.go:141] libmachine: Running pre-create checks...
I0414 16:31:18.635864 157245 main.go:141] libmachine: (addons-411768) Calling .PreCreateCheck
I0414 16:31:18.636332 157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
I0414 16:31:18.636752 157245 main.go:141] libmachine: Creating machine...
I0414 16:31:18.636768 157245 main.go:141] libmachine: (addons-411768) Calling .Create
I0414 16:31:18.636945 157245 main.go:141] libmachine: (addons-411768) creating KVM machine...
I0414 16:31:18.636964 157245 main.go:141] libmachine: (addons-411768) creating network...
I0414 16:31:18.638220 157245 main.go:141] libmachine: (addons-411768) DBG | found existing default KVM network
I0414 16:31:18.638957 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.638810 157267 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000208dd0}
I0414 16:31:18.639002 157245 main.go:141] libmachine: (addons-411768) DBG | created network xml:
I0414 16:31:18.639032 157245 main.go:141] libmachine: (addons-411768) DBG | <network>
I0414 16:31:18.639042 157245 main.go:141] libmachine: (addons-411768) DBG | <name>mk-addons-411768</name>
I0414 16:31:18.639047 157245 main.go:141] libmachine: (addons-411768) DBG | <dns enable='no'/>
I0414 16:31:18.639052 157245 main.go:141] libmachine: (addons-411768) DBG |
I0414 16:31:18.639057 157245 main.go:141] libmachine: (addons-411768) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0414 16:31:18.639065 157245 main.go:141] libmachine: (addons-411768) DBG | <dhcp>
I0414 16:31:18.639070 157245 main.go:141] libmachine: (addons-411768) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0414 16:31:18.639075 157245 main.go:141] libmachine: (addons-411768) DBG | </dhcp>
I0414 16:31:18.639081 157245 main.go:141] libmachine: (addons-411768) DBG | </ip>
I0414 16:31:18.639092 157245 main.go:141] libmachine: (addons-411768) DBG |
I0414 16:31:18.639103 157245 main.go:141] libmachine: (addons-411768) DBG | </network>
I0414 16:31:18.639128 157245 main.go:141] libmachine: (addons-411768) DBG |
I0414 16:31:18.643909 157245 main.go:141] libmachine: (addons-411768) DBG | trying to create private KVM network mk-addons-411768 192.168.39.0/24...
I0414 16:31:18.706993 157245 main.go:141] libmachine: (addons-411768) DBG | private KVM network mk-addons-411768 192.168.39.0/24 created
I0414 16:31:18.707043 157245 main.go:141] libmachine: (addons-411768) setting up store path in /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 ...
I0414 16:31:18.707055 157245 main.go:141] libmachine: (addons-411768) building disk image from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
I0414 16:31:18.707068 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.706969 157267 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20349-149500/.minikube
I0414 16:31:18.707216 157245 main.go:141] libmachine: (addons-411768) Downloading /home/jenkins/minikube-integration/20349-149500/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20349-149500/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
I0414 16:31:18.959419 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:18.959292 157267 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa...
I0414 16:31:19.317156 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:19.316992 157267 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/addons-411768.rawdisk...
I0414 16:31:19.317187 157245 main.go:141] libmachine: (addons-411768) DBG | Writing magic tar header
I0414 16:31:19.317201 157245 main.go:141] libmachine: (addons-411768) DBG | Writing SSH key tar header
I0414 16:31:19.317212 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:19.317160 157267 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 ...
I0414 16:31:19.317318 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768
I0414 16:31:19.317339 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube/machines
I0414 16:31:19.317352 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768 (perms=drwx------)
I0414 16:31:19.317368 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube/machines (perms=drwxr-xr-x)
I0414 16:31:19.317379 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500/.minikube (perms=drwxr-xr-x)
I0414 16:31:19.317390 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration/20349-149500 (perms=drwxrwxr-x)
I0414 16:31:19.317403 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0414 16:31:19.317415 157245 main.go:141] libmachine: (addons-411768) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0414 16:31:19.317431 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500/.minikube
I0414 16:31:19.317440 157245 main.go:141] libmachine: (addons-411768) creating domain...
I0414 16:31:19.317454 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20349-149500
I0414 16:31:19.317465 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I0414 16:31:19.317473 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home/jenkins
I0414 16:31:19.317483 157245 main.go:141] libmachine: (addons-411768) DBG | checking permissions on dir: /home
I0414 16:31:19.317493 157245 main.go:141] libmachine: (addons-411768) DBG | skipping /home - not owner
I0414 16:31:19.318491 157245 main.go:141] libmachine: (addons-411768) define libvirt domain using xml:
I0414 16:31:19.318516 157245 main.go:141] libmachine: (addons-411768) <domain type='kvm'>
I0414 16:31:19.318528 157245 main.go:141] libmachine: (addons-411768) <name>addons-411768</name>
I0414 16:31:19.318536 157245 main.go:141] libmachine: (addons-411768) <memory unit='MiB'>4000</memory>
I0414 16:31:19.318568 157245 main.go:141] libmachine: (addons-411768) <vcpu>2</vcpu>
I0414 16:31:19.318579 157245 main.go:141] libmachine: (addons-411768) <features>
I0414 16:31:19.318594 157245 main.go:141] libmachine: (addons-411768) <acpi/>
I0414 16:31:19.318607 157245 main.go:141] libmachine: (addons-411768) <apic/>
I0414 16:31:19.318645 157245 main.go:141] libmachine: (addons-411768) <pae/>
I0414 16:31:19.318666 157245 main.go:141] libmachine: (addons-411768)
I0414 16:31:19.318676 157245 main.go:141] libmachine: (addons-411768) </features>
I0414 16:31:19.318689 157245 main.go:141] libmachine: (addons-411768) <cpu mode='host-passthrough'>
I0414 16:31:19.318714 157245 main.go:141] libmachine: (addons-411768)
I0414 16:31:19.318726 157245 main.go:141] libmachine: (addons-411768) </cpu>
I0414 16:31:19.318735 157245 main.go:141] libmachine: (addons-411768) <os>
I0414 16:31:19.318745 157245 main.go:141] libmachine: (addons-411768) <type>hvm</type>
I0414 16:31:19.318755 157245 main.go:141] libmachine: (addons-411768) <boot dev='cdrom'/>
I0414 16:31:19.318768 157245 main.go:141] libmachine: (addons-411768) <boot dev='hd'/>
I0414 16:31:19.318798 157245 main.go:141] libmachine: (addons-411768) <bootmenu enable='no'/>
I0414 16:31:19.318819 157245 main.go:141] libmachine: (addons-411768) </os>
I0414 16:31:19.318828 157245 main.go:141] libmachine: (addons-411768) <devices>
I0414 16:31:19.318851 157245 main.go:141] libmachine: (addons-411768) <disk type='file' device='cdrom'>
I0414 16:31:19.318869 157245 main.go:141] libmachine: (addons-411768) <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/boot2docker.iso'/>
I0414 16:31:19.318877 157245 main.go:141] libmachine: (addons-411768) <target dev='hdc' bus='scsi'/>
I0414 16:31:19.318899 157245 main.go:141] libmachine: (addons-411768) <readonly/>
I0414 16:31:19.318917 157245 main.go:141] libmachine: (addons-411768) </disk>
I0414 16:31:19.318934 157245 main.go:141] libmachine: (addons-411768) <disk type='file' device='disk'>
I0414 16:31:19.318949 157245 main.go:141] libmachine: (addons-411768) <driver name='qemu' type='raw' cache='default' io='threads' />
I0414 16:31:19.318967 157245 main.go:141] libmachine: (addons-411768) <source file='/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/addons-411768.rawdisk'/>
I0414 16:31:19.318978 157245 main.go:141] libmachine: (addons-411768) <target dev='hda' bus='virtio'/>
I0414 16:31:19.318990 157245 main.go:141] libmachine: (addons-411768) </disk>
I0414 16:31:19.319005 157245 main.go:141] libmachine: (addons-411768) <interface type='network'>
I0414 16:31:19.319027 157245 main.go:141] libmachine: (addons-411768) <source network='mk-addons-411768'/>
I0414 16:31:19.319038 157245 main.go:141] libmachine: (addons-411768) <model type='virtio'/>
I0414 16:31:19.319048 157245 main.go:141] libmachine: (addons-411768) </interface>
I0414 16:31:19.319058 157245 main.go:141] libmachine: (addons-411768) <interface type='network'>
I0414 16:31:19.319072 157245 main.go:141] libmachine: (addons-411768) <source network='default'/>
I0414 16:31:19.319092 157245 main.go:141] libmachine: (addons-411768) <model type='virtio'/>
I0414 16:31:19.319104 157245 main.go:141] libmachine: (addons-411768) </interface>
I0414 16:31:19.319119 157245 main.go:141] libmachine: (addons-411768) <serial type='pty'>
I0414 16:31:19.319130 157245 main.go:141] libmachine: (addons-411768) <target port='0'/>
I0414 16:31:19.319143 157245 main.go:141] libmachine: (addons-411768) </serial>
I0414 16:31:19.319154 157245 main.go:141] libmachine: (addons-411768) <console type='pty'>
I0414 16:31:19.319163 157245 main.go:141] libmachine: (addons-411768) <target type='serial' port='0'/>
I0414 16:31:19.319174 157245 main.go:141] libmachine: (addons-411768) </console>
I0414 16:31:19.319182 157245 main.go:141] libmachine: (addons-411768) <rng model='virtio'>
I0414 16:31:19.319195 157245 main.go:141] libmachine: (addons-411768) <backend model='random'>/dev/random</backend>
I0414 16:31:19.319213 157245 main.go:141] libmachine: (addons-411768) </rng>
I0414 16:31:19.319222 157245 main.go:141] libmachine: (addons-411768)
I0414 16:31:19.319231 157245 main.go:141] libmachine: (addons-411768)
I0414 16:31:19.319240 157245 main.go:141] libmachine: (addons-411768) </devices>
I0414 16:31:19.319249 157245 main.go:141] libmachine: (addons-411768) </domain>
I0414 16:31:19.319258 157245 main.go:141] libmachine: (addons-411768)
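Note: the XML above is the libvirt domain definition minikube generated for the VM in this run: 2 vCPUs, 4000 MiB of memory, the boot2docker ISO on a SCSI cdrom, the raw disk on virtio, and two virtio NICs attached to the mk-addons-411768 and default networks. A minimal sketch of reading the stored definition back out of libvirt for comparison, assuming the Go bindings from libvirt.org/go/libvirt are available on the host (illustrative only, not minikube code):

// dump_domain_xml.go: illustrative sketch, not minikube code. Reads the stored
// libvirt definition of the addons-411768 domain over the same qemu:///system URI.
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt" // assumption: libvirt Go bindings installed
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("addons-411768")
	if err != nil {
		log.Fatalf("lookup domain: %v", err)
	}
	defer dom.Free()

	xml, err := dom.GetXMLDesc(0)
	if err != nil {
		log.Fatalf("get xml: %v", err)
	}
	fmt.Println(xml)
}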
I0414 16:31:19.322828 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:20:b7:3f in network default
I0414 16:31:19.323315 157245 main.go:141] libmachine: (addons-411768) starting domain...
I0414 16:31:19.323337 157245 main.go:141] libmachine: (addons-411768) ensuring networks are active...
I0414 16:31:19.323348 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:19.323956 157245 main.go:141] libmachine: (addons-411768) Ensuring network default is active
I0414 16:31:19.324246 157245 main.go:141] libmachine: (addons-411768) Ensuring network mk-addons-411768 is active
I0414 16:31:19.324660 157245 main.go:141] libmachine: (addons-411768) getting domain XML...
I0414 16:31:19.325289 157245 main.go:141] libmachine: (addons-411768) creating domain...
I0414 16:31:20.494301 157245 main.go:141] libmachine: (addons-411768) waiting for IP...
I0414 16:31:20.494965 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:20.495284 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:20.495349 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:20.495293 157267 retry.go:31] will retry after 312.32887ms: waiting for domain to come up
I0414 16:31:20.808758 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:20.809228 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:20.809258 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:20.809189 157267 retry.go:31] will retry after 247.375577ms: waiting for domain to come up
I0414 16:31:21.058635 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:21.059068 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:21.059097 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.059024 157267 retry.go:31] will retry after 466.453619ms: waiting for domain to come up
I0414 16:31:21.526557 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:21.526927 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:21.526948 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.526904 157267 retry.go:31] will retry after 432.389693ms: waiting for domain to come up
I0414 16:31:21.960377 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:21.960818 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:21.960856 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:21.960782 157267 retry.go:31] will retry after 547.701184ms: waiting for domain to come up
I0414 16:31:22.511558 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:22.512082 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:22.512107 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:22.512048 157267 retry.go:31] will retry after 810.522572ms: waiting for domain to come up
I0414 16:31:23.324254 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:23.324660 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:23.324703 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:23.324629 157267 retry.go:31] will retry after 1.103233225s: waiting for domain to come up
I0414 16:31:24.429919 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:24.430378 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:24.430407 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:24.430318 157267 retry.go:31] will retry after 1.14528623s: waiting for domain to come up
I0414 16:31:25.577617 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:25.577975 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:25.578005 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:25.577945 157267 retry.go:31] will retry after 1.858984681s: waiting for domain to come up
I0414 16:31:27.438914 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:27.439378 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:27.439419 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:27.439356 157267 retry.go:31] will retry after 2.310241133s: waiting for domain to come up
I0414 16:31:29.751300 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:29.751813 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:29.751857 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:29.751775 157267 retry.go:31] will retry after 2.494754123s: waiting for domain to come up
I0414 16:31:32.249280 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:32.249780 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:32.249810 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:32.249761 157267 retry.go:31] will retry after 3.010871662s: waiting for domain to come up
I0414 16:31:35.262847 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:35.263313 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:35.263358 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:35.263309 157267 retry.go:31] will retry after 3.112482414s: waiting for domain to come up
I0414 16:31:38.377075 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:38.377413 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find current IP address of domain addons-411768 in network mk-addons-411768
I0414 16:31:38.377463 157245 main.go:141] libmachine: (addons-411768) DBG | I0414 16:31:38.377417 157267 retry.go:31] will retry after 3.628902204s: waiting for domain to come up
I0414 16:31:42.010099 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:42.010509 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has current primary IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:42.010533 157245 main.go:141] libmachine: (addons-411768) found domain IP: 192.168.39.237
I0414 16:31:42.010554 157245 main.go:141] libmachine: (addons-411768) reserving static IP address...
I0414 16:31:42.010919 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find host DHCP lease matching {name: "addons-411768", mac: "52:54:00:81:2d:89", ip: "192.168.39.237"} in network mk-addons-411768
I0414 16:31:42.079547 157245 main.go:141] libmachine: (addons-411768) reserved static IP address 192.168.39.237 for domain addons-411768
I0414 16:31:42.079576 157245 main.go:141] libmachine: (addons-411768) DBG | Getting to WaitForSSH function...
I0414 16:31:42.079584 157245 main.go:141] libmachine: (addons-411768) waiting for SSH...
I0414 16:31:42.081948 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:42.082267 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768
I0414 16:31:42.082295 157245 main.go:141] libmachine: (addons-411768) DBG | unable to find defined IP address of network mk-addons-411768 interface with MAC address 52:54:00:81:2d:89
I0414 16:31:42.082430 157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH client type: external
I0414 16:31:42.082459 157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa (-rw-------)
I0414 16:31:42.082498 157245 main.go:141] libmachine: (addons-411768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 16:31:42.082509 157245 main.go:141] libmachine: (addons-411768) DBG | About to run SSH command:
I0414 16:31:42.082521 157245 main.go:141] libmachine: (addons-411768) DBG | exit 0
I0414 16:31:42.086060 157245 main.go:141] libmachine: (addons-411768) DBG | SSH cmd err, output: exit status 255:
I0414 16:31:42.086087 157245 main.go:141] libmachine: (addons-411768) DBG | Error getting ssh command 'exit 0' : ssh command error:
I0414 16:31:42.086094 157245 main.go:141] libmachine: (addons-411768) DBG | command : exit 0
I0414 16:31:42.086100 157245 main.go:141] libmachine: (addons-411768) DBG | err : exit status 255
I0414 16:31:42.086108 157245 main.go:141] libmachine: (addons-411768) DBG | output :
I0414 16:31:45.087759 157245 main.go:141] libmachine: (addons-411768) DBG | Getting to WaitForSSH function...
I0414 16:31:45.090263 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.090581 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.090601 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.090752 157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH client type: external
I0414 16:31:45.090775 157245 main.go:141] libmachine: (addons-411768) DBG | Using SSH private key: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa (-rw-------)
I0414 16:31:45.090806 157245 main.go:141] libmachine: (addons-411768) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa -p 22] /usr/bin/ssh <nil>}
I0414 16:31:45.090821 157245 main.go:141] libmachine: (addons-411768) DBG | About to run SSH command:
I0414 16:31:45.090848 157245 main.go:141] libmachine: (addons-411768) DBG | exit 0
I0414 16:31:45.213395 157245 main.go:141] libmachine: (addons-411768) DBG | SSH cmd err, output: <nil>:
I0414 16:31:45.213700 157245 main.go:141] libmachine: (addons-411768) KVM machine creation complete
I0414 16:31:45.213978 157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
I0414 16:31:45.214532 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:45.214711 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:45.214826 157245 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0414 16:31:45.214840 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:31:45.216023 157245 main.go:141] libmachine: Detecting operating system of created instance...
I0414 16:31:45.216034 157245 main.go:141] libmachine: Waiting for SSH to be available...
I0414 16:31:45.216039 157245 main.go:141] libmachine: Getting to WaitForSSH function...
I0414 16:31:45.216044 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.218121 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.218473 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.218491 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.218615 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.218773 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.218930 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.219021 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.219166 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:45.219368 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:45.219377 157245 main.go:141] libmachine: About to run SSH command:
exit 0
I0414 16:31:45.324619 157245 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 16:31:45.324638 157245 main.go:141] libmachine: Detecting the provisioner...
I0414 16:31:45.324644 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.327115 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.327424 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.327453 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.327610 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.327778 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.327895 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.328012 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.328154 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:45.328426 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:45.328439 157245 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0414 16:31:45.429915 157245 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0414 16:31:45.429997 157245 main.go:141] libmachine: found compatible host: buildroot
I0414 16:31:45.430012 157245 main.go:141] libmachine: Provisioning with buildroot...
I0414 16:31:45.430023 157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
I0414 16:31:45.430319 157245 buildroot.go:166] provisioning hostname "addons-411768"
I0414 16:31:45.430342 157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
I0414 16:31:45.430498 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.432908 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.433231 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.433251 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.433377 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.433534 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.433683 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.433799 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.433957 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:45.434153 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:45.434165 157245 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-411768 && echo "addons-411768" | sudo tee /etc/hostname
I0414 16:31:45.546961 157245 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-411768
I0414 16:31:45.546989 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.549613 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.549979 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.550005 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.550211 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.550400 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.550549 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.550691 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.550963 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:45.551212 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:45.551239 157245 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-411768' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-411768/g' /etc/hosts;
else
echo '127.0.1.1 addons-411768' | sudo tee -a /etc/hosts;
fi
fi
I0414 16:31:45.657525 157245 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 16:31:45.657555 157245 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20349-149500/.minikube CaCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20349-149500/.minikube}
I0414 16:31:45.657577 157245 buildroot.go:174] setting up certificates
I0414 16:31:45.657589 157245 provision.go:84] configureAuth start
I0414 16:31:45.657603 157245 main.go:141] libmachine: (addons-411768) Calling .GetMachineName
I0414 16:31:45.657842 157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
I0414 16:31:45.660375 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.660769 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.660803 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.660944 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.663138 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.663489 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.663522 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.663601 157245 provision.go:143] copyHostCerts
I0414 16:31:45.663666 157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/ca.pem (1082 bytes)
I0414 16:31:45.663831 157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/cert.pem (1123 bytes)
I0414 16:31:45.663909 157245 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20349-149500/.minikube/key.pem (1675 bytes)
I0414 16:31:45.663967 157245 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem org=jenkins.addons-411768 san=[127.0.0.1 192.168.39.237 addons-411768 localhost minikube]
I0414 16:31:45.795299 157245 provision.go:177] copyRemoteCerts
I0414 16:31:45.795360 157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 16:31:45.795383 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.797898 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.798191 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.798220 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.798360 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.798539 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.798662 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.798789 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:31:45.879801 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 16:31:45.902404 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0414 16:31:45.925084 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0414 16:31:45.949044 157245 provision.go:87] duration metric: took 291.438071ms to configureAuth
I0414 16:31:45.949068 157245 buildroot.go:189] setting minikube options for container-runtime
I0414 16:31:45.949221 157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:31:45.949309 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:45.951682 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.951981 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:45.952011 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:45.952166 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:45.952333 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.952467 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:45.952638 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:45.952795 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:45.953094 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:45.953120 157245 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0414 16:31:46.166457 157245 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I0414 16:31:46.166488 157245 main.go:141] libmachine: Checking connection to Docker...
I0414 16:31:46.166495 157245 main.go:141] libmachine: (addons-411768) Calling .GetURL
I0414 16:31:46.167769 157245 main.go:141] libmachine: (addons-411768) DBG | using libvirt version 6000000
I0414 16:31:46.170175 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.170508 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.170536 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.170657 157245 main.go:141] libmachine: Docker is up and running!
I0414 16:31:46.170673 157245 main.go:141] libmachine: Reticulating splines...
I0414 16:31:46.170682 157245 client.go:171] duration metric: took 27.925654468s to LocalClient.Create
I0414 16:31:46.170701 157245 start.go:167] duration metric: took 27.925716628s to libmachine.API.Create "addons-411768"
I0414 16:31:46.170712 157245 start.go:293] postStartSetup for "addons-411768" (driver="kvm2")
I0414 16:31:46.170722 157245 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 16:31:46.170739 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:46.170954 157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 16:31:46.170978 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:46.172841 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.173138 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.173167 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.173288 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:46.173451 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:46.173601 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:46.173740 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:31:46.257790 157245 ssh_runner.go:195] Run: cat /etc/os-release
I0414 16:31:46.261866 157245 info.go:137] Remote host: Buildroot 2023.02.9
I0414 16:31:46.261882 157245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/addons for local assets ...
I0414 16:31:46.261943 157245 filesync.go:126] Scanning /home/jenkins/minikube-integration/20349-149500/.minikube/files for local assets ...
I0414 16:31:46.261966 157245 start.go:296] duration metric: took 91.249009ms for postStartSetup
I0414 16:31:46.262012 157245 main.go:141] libmachine: (addons-411768) Calling .GetConfigRaw
I0414 16:31:46.262500 157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
I0414 16:31:46.264919 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.265236 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.265265 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.265442 157245 profile.go:143] Saving config to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/config.json ...
I0414 16:31:46.265640 157245 start.go:128] duration metric: took 28.037232893s to createHost
I0414 16:31:46.265670 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:46.267566 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.267838 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.267867 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.268045 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:46.268217 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:46.268328 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:46.268470 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:46.268610 157245 main.go:141] libmachine: Using SSH client type: native
I0414 16:31:46.268804 157245 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x836360] 0x839060 <nil> [] 0s} 192.168.39.237 22 <nil> <nil>}
I0414 16:31:46.268814 157245 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0414 16:31:46.370629 157245 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744648306.342390252
I0414 16:31:46.370658 157245 fix.go:216] guest clock: 1744648306.342390252
I0414 16:31:46.370667 157245 fix.go:229] Guest: 2025-04-14 16:31:46.342390252 +0000 UTC Remote: 2025-04-14 16:31:46.2656559 +0000 UTC m=+28.135423600 (delta=76.734352ms)
I0414 16:31:46.370702 157245 fix.go:200] guest clock delta is within tolerance: 76.734352ms
I0414 16:31:46.370708 157245 start.go:83] releasing machines lock for "addons-411768", held for 28.142389503s
I0414 16:31:46.370730 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:46.370983 157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
I0414 16:31:46.373400 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.373735 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.373764 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.373943 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:46.374447 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:46.374620 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:31:46.374730 157245 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 16:31:46.374787 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:46.374822 157245 ssh_runner.go:195] Run: cat /version.json
I0414 16:31:46.374847 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:31:46.377238 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.377425 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.377601 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.377630 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.377755 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:46.377823 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:46.377875 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:46.377930 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:46.378023 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:31:46.378104 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:46.378172 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:31:46.378227 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:31:46.378328 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:31:46.378462 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:31:46.480700 157245 ssh_runner.go:195] Run: systemctl --version
I0414 16:31:46.486242 157245 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0414 16:31:46.645308 157245 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0414 16:31:46.651524 157245 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0414 16:31:46.651581 157245 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 16:31:46.669868 157245 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0414 16:31:46.669896 157245 start.go:495] detecting cgroup driver to use...
I0414 16:31:46.669992 157245 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0414 16:31:46.685692 157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0414 16:31:46.700090 157245 docker.go:217] disabling cri-docker service (if available) ...
I0414 16:31:46.700150 157245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 16:31:46.713710 157245 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 16:31:46.726966 157245 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 16:31:46.838767 157245 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 16:31:46.979989 157245 docker.go:233] disabling docker service ...
I0414 16:31:46.980067 157245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 16:31:47.003477 157245 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 16:31:47.016069 157245 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 16:31:47.150424 157245 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 16:31:47.287057 157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 16:31:47.300548 157245 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0414 16:31:47.318834 157245 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I0414 16:31:47.318902 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.329716 157245 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0414 16:31:47.329783 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.340624 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.351334 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.362121 157245 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 16:31:47.373147 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.383994 157245 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.401225 157245 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0414 16:31:47.411919 157245 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 16:31:47.421523 157245 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0414 16:31:47.421583 157245 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0414 16:31:47.434300 157245 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 16:31:47.444176 157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 16:31:47.564481 157245 ssh_runner.go:195] Run: sudo systemctl restart crio
I0414 16:31:47.663689 157245 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I0414 16:31:47.663772 157245 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0414 16:31:47.668684 157245 start.go:563] Will wait 60s for crictl version
I0414 16:31:47.668749 157245 ssh_runner.go:195] Run: which crictl
I0414 16:31:47.672296 157245 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 16:31:47.711027 157245 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0414 16:31:47.711150 157245 ssh_runner.go:195] Run: crio --version
I0414 16:31:47.738177 157245 ssh_runner.go:195] Run: crio --version
I0414 16:31:47.765363 157245 out.go:177] * Preparing Kubernetes v1.32.2 on CRI-O 1.29.1 ...
I0414 16:31:47.766624 157245 main.go:141] libmachine: (addons-411768) Calling .GetIP
I0414 16:31:47.769252 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:47.769571 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:31:47.769595 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:31:47.769815 157245 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0414 16:31:47.773499 157245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 16:31:47.785608 157245 kubeadm.go:883] updating cluster {Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 16:31:47.785697 157245 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime crio
I0414 16:31:47.785734 157245 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:31:47.817308 157245 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.2". assuming images are not preloaded.
I0414 16:31:47.817385 157245 ssh_runner.go:195] Run: which lz4
I0414 16:31:47.821171 157245 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0414 16:31:47.825038 157245 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I0414 16:31:47.825061 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (399124012 bytes)
I0414 16:31:49.106194 157245 crio.go:462] duration metric: took 1.285048967s to copy over tarball
I0414 16:31:49.106305 157245 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0414 16:31:51.232325 157245 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125982911s)
I0414 16:31:51.232352 157245 crio.go:469] duration metric: took 2.126115401s to extract the tarball
I0414 16:31:51.232360 157245 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0414 16:31:51.270529 157245 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 16:31:51.317395 157245 crio.go:514] all images are preloaded for cri-o runtime.
I0414 16:31:51.317423 157245 cache_images.go:84] Images are preloaded, skipping loading
I0414 16:31:51.317433 157245 kubeadm.go:934] updating node { 192.168.39.237 8443 v1.32.2 crio true true} ...
I0414 16:31:51.317541 157245 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-411768 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 16:31:51.317619 157245 ssh_runner.go:195] Run: crio config
I0414 16:31:51.361120 157245 cni.go:84] Creating CNI manager for ""
I0414 16:31:51.361147 157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0414 16:31:51.361161 157245 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 16:31:51.361180 157245 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-411768 NodeName:addons-411768 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0414 16:31:51.361284 157245 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.237
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-411768"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.237"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0414 16:31:51.361344 157245 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 16:31:51.371245 157245 binaries.go:44] Found k8s binaries, skipping transfer
I0414 16:31:51.371304 157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0414 16:31:51.380591 157245 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0414 16:31:51.396369 157245 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 16:31:51.412364 157245 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
I0414 16:31:51.428617 157245 ssh_runner.go:195] Run: grep 192.168.39.237 control-plane.minikube.internal$ /etc/hosts
I0414 16:31:51.432186 157245 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 16:31:51.443795 157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 16:31:51.559741 157245 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 16:31:51.576587 157245 certs.go:68] Setting up /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768 for IP: 192.168.39.237
I0414 16:31:51.576605 157245 certs.go:194] generating shared ca certs ...
I0414 16:31:51.576623 157245 certs.go:226] acquiring lock for ca certs: {Name:mk65518f71a0fe967168d84423f624d889cf0622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:51.576752 157245 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key
I0414 16:31:51.816547 157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt ...
I0414 16:31:51.816576 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt: {Name:mkc05f9e104f16e9c207e08de1afcb71287bd637 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:51.816738 157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key ...
I0414 16:31:51.816749 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key: {Name:mka0d07948e661ee7d0f8d239e748e8feff38fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:51.816815 157245 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key
I0414 16:31:52.100992 157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt ...
I0414 16:31:52.101023 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt: {Name:mkb8bfb6bf2583abdbb8dc4b7b2d17a28691dcfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.101182 157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key ...
I0414 16:31:52.101194 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key: {Name:mkccf03a6725626a3d35aa22dfe62e64adc6f399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.101260 157245 certs.go:256] generating profile certs ...
I0414 16:31:52.101312 157245 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key
I0414 16:31:52.101328 157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt with IP's: []
I0414 16:31:52.291999 157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt ...
I0414 16:31:52.292035 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.crt: {Name:mk46b1618752311b5a3a60b53139a9d22b4ec008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.292215 157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key ...
I0414 16:31:52.292227 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/client.key: {Name:mk0d1f2e4314112bb8b11a664cebc5e9f9dd4aa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.292304 157245 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197
I0414 16:31:52.292324 157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
I0414 16:31:52.452776 157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 ...
I0414 16:31:52.452809 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197: {Name:mkc8fd882d9fc7f1325037b403b47e35b8b745d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.452980 157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197 ...
I0414 16:31:52.452993 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197: {Name:mk7af8d7412836b3cabc64c25d860df9716a89b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.453071 157245 certs.go:381] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt.741b7197 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt
I0414 16:31:52.453172 157245 certs.go:385] copying /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key.741b7197 -> /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key
I0414 16:31:52.453235 157245 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key
I0414 16:31:52.453256 157245 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt with IP's: []
I0414 16:31:52.909585 157245 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt ...
I0414 16:31:52.909620 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt: {Name:mk30d90411fde7688f22a78c7b25d39671d22af2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.909786 157245 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key ...
I0414 16:31:52.909799 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key: {Name:mk4dd2fda4fc19d3541d44d291b72ed4affa80b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:31:52.910035 157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca-key.pem (1679 bytes)
I0414 16:31:52.910078 157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/ca.pem (1082 bytes)
I0414 16:31:52.910106 157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/cert.pem (1123 bytes)
I0414 16:31:52.910131 157245 certs.go:484] found cert: /home/jenkins/minikube-integration/20349-149500/.minikube/certs/key.pem (1675 bytes)
I0414 16:31:52.910673 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 16:31:52.938330 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 16:31:52.962684 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 16:31:52.986413 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 16:31:53.010120 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0414 16:31:53.033660 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0414 16:31:53.057129 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 16:31:53.080122 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/profiles/addons-411768/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0414 16:31:53.112722 157245 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20349-149500/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 16:31:53.141259 157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 16:31:53.161383 157245 ssh_runner.go:195] Run: openssl version
I0414 16:31:53.167380 157245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 16:31:53.177593 157245 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 16:31:53.181814 157245 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 16:31 /usr/share/ca-certificates/minikubeCA.pem
I0414 16:31:53.181882 157245 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 16:31:53.187497 157245 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 16:31:53.197397 157245 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 16:31:53.201753 157245 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 16:31:53.201802 157245 kubeadm.go:392] StartCluster: {Name:addons-411768 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-411768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 16:31:53.201905 157245 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0414 16:31:53.201943 157245 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 16:31:53.242352 157245 cri.go:89] found id: ""
I0414 16:31:53.242423 157245 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 16:31:53.251899 157245 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0414 16:31:53.261098 157245 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0414 16:31:53.270416 157245 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0414 16:31:53.270438 157245 kubeadm.go:157] found existing configuration files:
I0414 16:31:53.270482 157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0414 16:31:53.278904 157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0414 16:31:53.278948 157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0414 16:31:53.287548 157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0414 16:31:53.303510 157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0414 16:31:53.303571 157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0414 16:31:53.314813 157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0414 16:31:53.323856 157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0414 16:31:53.323921 157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0414 16:31:53.333173 157245 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0414 16:31:53.341922 157245 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0414 16:31:53.341978 157245 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0414 16:31:53.351231 157245 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0414 16:31:53.402096 157245 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0414 16:31:53.402178 157245 kubeadm.go:310] [preflight] Running pre-flight checks
I0414 16:31:53.502653 157245 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0414 16:31:53.502817 157245 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0414 16:31:53.502948 157245 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0414 16:31:53.510466 157245 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0414 16:31:53.743764 157245 out.go:235] - Generating certificates and keys ...
I0414 16:31:53.743887 157245 kubeadm.go:310] [certs] Using existing ca certificate authority
I0414 16:31:53.743943 157245 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0414 16:31:53.744034 157245 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0414 16:31:53.971856 157245 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0414 16:31:54.101338 157245 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0414 16:31:54.448455 157245 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0414 16:31:54.791191 157245 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0414 16:31:54.791323 157245 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-411768 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I0414 16:31:54.955997 157245 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0414 16:31:54.956217 157245 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-411768 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
I0414 16:31:55.014848 157245 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0414 16:31:55.105404 157245 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0414 16:31:55.220389 157245 kubeadm.go:310] [certs] Generating "sa" key and public key
I0414 16:31:55.220487 157245 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0414 16:31:55.314104 157245 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0414 16:31:55.507982 157245 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0414 16:31:55.788092 157245 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0414 16:31:56.132658 157245 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0414 16:31:56.241858 157245 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0414 16:31:56.242049 157245 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0414 16:31:56.244266 157245 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0414 16:31:56.245941 157245 out.go:235] - Booting up control plane ...
I0414 16:31:56.246023 157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0414 16:31:56.246090 157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0414 16:31:56.246547 157245 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0414 16:31:56.262347 157245 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0414 16:31:56.268582 157245 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0414 16:31:56.268638 157245 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0414 16:31:56.396463 157245 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0414 16:31:56.396586 157245 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0414 16:31:56.897913 157245 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.995015ms
I0414 16:31:56.898020 157245 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0414 16:32:01.397257 157245 kubeadm.go:310] [api-check] The API server is healthy after 4.501396666s
I0414 16:32:01.408244 157245 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0414 16:32:01.420893 157245 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0414 16:32:01.445113 157245 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0414 16:32:01.445371 157245 kubeadm.go:310] [mark-control-plane] Marking the node addons-411768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0414 16:32:01.459888 157245 kubeadm.go:310] [bootstrap-token] Using token: ajtwy5.kapiw6da3r2hdoce
I0414 16:32:01.461000 157245 out.go:235] - Configuring RBAC rules ...
I0414 16:32:01.461143 157245 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0414 16:32:01.464927 157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0414 16:32:01.470514 157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0414 16:32:01.473470 157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0414 16:32:01.478503 157245 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0414 16:32:01.481039 157245 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0414 16:32:01.803030 157245 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0414 16:32:02.243354 157245 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0414 16:32:02.803423 157245 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0414 16:32:02.804155 157245 kubeadm.go:310]
I0414 16:32:02.804217 157245 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0414 16:32:02.804222 157245 kubeadm.go:310]
I0414 16:32:02.804322 157245 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0414 16:32:02.804332 157245 kubeadm.go:310]
I0414 16:32:02.804366 157245 kubeadm.go:310] mkdir -p $HOME/.kube
I0414 16:32:02.804443 157245 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0414 16:32:02.804527 157245 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0414 16:32:02.804545 157245 kubeadm.go:310]
I0414 16:32:02.804604 157245 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0414 16:32:02.804613 157245 kubeadm.go:310]
I0414 16:32:02.804687 157245 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0414 16:32:02.804698 157245 kubeadm.go:310]
I0414 16:32:02.804743 157245 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0414 16:32:02.804808 157245 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0414 16:32:02.804890 157245 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0414 16:32:02.804898 157245 kubeadm.go:310]
I0414 16:32:02.805007 157245 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0414 16:32:02.805117 157245 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0414 16:32:02.805128 157245 kubeadm.go:310]
I0414 16:32:02.805216 157245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ajtwy5.kapiw6da3r2hdoce \
I0414 16:32:02.805365 157245 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d \
I0414 16:32:02.805397 157245 kubeadm.go:310] --control-plane
I0414 16:32:02.805404 157245 kubeadm.go:310]
I0414 16:32:02.805491 157245 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0414 16:32:02.805499 157245 kubeadm.go:310]
I0414 16:32:02.805599 157245 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ajtwy5.kapiw6da3r2hdoce \
I0414 16:32:02.805758 157245 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:58a703ba5c74005b6eab34cf4b65ddf79c109f88fa30e8afe2d055c58debc01d
I0414 16:32:02.806631 157245 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0414 16:32:02.806651 157245 cni.go:84] Creating CNI manager for ""
I0414 16:32:02.806658 157245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0414 16:32:02.808086 157245 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0414 16:32:02.809134 157245 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0414 16:32:02.822996 157245 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0414 16:32:02.840888 157245 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0414 16:32:02.840991 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:02.841016 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-411768 minikube.k8s.io/updated_at=2025_04_14T16_32_02_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37 minikube.k8s.io/name=addons-411768 minikube.k8s.io/primary=true
I0414 16:32:02.985991 157245 ops.go:34] apiserver oom_adj: -16
I0414 16:32:02.986098 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:03.487047 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:03.986223 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:04.486993 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:04.986829 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:05.486806 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:05.986582 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:06.487112 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:06.986262 157245 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0414 16:32:07.077673 157245 kubeadm.go:1113] duration metric: took 4.236743459s to wait for elevateKubeSystemPrivileges
I0414 16:32:07.077717 157245 kubeadm.go:394] duration metric: took 13.87591877s to StartCluster
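[editor's note] The preceding block shows minikube binding cluster-admin to kube-system's default service account and then re-running `kubectl get sa default` roughly every 500ms until it exists, reporting the elapsed time (~4.2s). A minimal sketch of that poll-until-ready pattern, assuming only a `kubectl` binary on PATH and a readable kubeconfig; this is not minikube's own elevateKubeSystemPrivileges code:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the ~500ms cadence visible in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return time.Since(start), nil // default service account exists
		}
		if time.Now().After(deadline) {
			return time.Since(start), fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	// Path is illustrative; it matches the kubeconfig used in the log lines above.
	elapsed, err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("default service account ready after %s\n", elapsed)
}
```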
I0414 16:32:07.077741 157245 settings.go:142] acquiring lock: {Name:mk0f1596f566b3225bf96154f374fff0641b21e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:32:07.077896 157245 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20349-149500/kubeconfig
I0414 16:32:07.078346 157245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20349-149500/kubeconfig: {Name:mk04cc1ba53a15658f068f5563ce5e474cfc825b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 16:32:07.078532 157245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0414 16:32:07.078574 157245 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I0414 16:32:07.078665 157245 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I0414 16:32:07.078772 157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:32:07.078790 157245 addons.go:69] Setting yakd=true in profile "addons-411768"
I0414 16:32:07.078808 157245 addons.go:69] Setting ingress-dns=true in profile "addons-411768"
I0414 16:32:07.078834 157245 addons.go:69] Setting metrics-server=true in profile "addons-411768"
I0414 16:32:07.078840 157245 addons.go:69] Setting storage-provisioner=true in profile "addons-411768"
I0414 16:32:07.078843 157245 addons.go:238] Setting addon ingress-dns=true in "addons-411768"
I0414 16:32:07.078853 157245 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-411768"
I0414 16:32:07.078861 157245 addons.go:69] Setting registry=true in profile "addons-411768"
I0414 16:32:07.078867 157245 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-411768"
I0414 16:32:07.078883 157245 addons.go:238] Setting addon registry=true in "addons-411768"
I0414 16:32:07.078888 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.078910 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.078914 157245 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-411768"
I0414 16:32:07.078953 157245 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-411768"
I0414 16:32:07.078934 157245 addons.go:69] Setting volcano=true in profile "addons-411768"
I0414 16:32:07.078989 157245 addons.go:238] Setting addon volcano=true in "addons-411768"
I0414 16:32:07.079004 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079014 157245 addons.go:69] Setting default-storageclass=true in profile "addons-411768"
I0414 16:32:07.079051 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079059 157245 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-411768"
I0414 16:32:07.079074 157245 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-411768"
I0414 16:32:07.079094 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079135 157245 addons.go:69] Setting cloud-spanner=true in profile "addons-411768"
I0414 16:32:07.079179 157245 addons.go:238] Setting addon cloud-spanner=true in "addons-411768"
I0414 16:32:07.079288 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079309 157245 addons.go:69] Setting ingress=true in profile "addons-411768"
I0414 16:32:07.079330 157245 addons.go:238] Setting addon ingress=true in "addons-411768"
I0414 16:32:07.079367 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079442 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079480 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.079496 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079513 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079052 157245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-411768"
I0414 16:32:07.078825 157245 addons.go:69] Setting inspektor-gadget=true in profile "addons-411768"
I0414 16:32:07.079540 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079553 157245 addons.go:238] Setting addon inspektor-gadget=true in "addons-411768"
I0414 16:32:07.079553 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.079564 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079567 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.079577 157245 addons.go:69] Setting gcp-auth=true in profile "addons-411768"
I0414 16:32:07.079582 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.079595 157245 mustload.go:65] Loading cluster: addons-411768
I0414 16:32:07.079615 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.078853 157245 addons.go:238] Setting addon storage-provisioner=true in "addons-411768"
I0414 16:32:07.078845 157245 addons.go:238] Setting addon metrics-server=true in "addons-411768"
I0414 16:32:07.079291 157245 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-411768"
I0414 16:32:07.078815 157245 addons.go:238] Setting addon yakd=true in "addons-411768"
I0414 16:32:07.079701 157245 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-411768"
I0414 16:32:07.079538 157245 addons.go:69] Setting volumesnapshots=true in profile "addons-411768"
I0414 16:32:07.079747 157245 addons.go:238] Setting addon volumesnapshots=true in "addons-411768"
I0414 16:32:07.079788 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079797 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.079922 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.079945 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.079965 157245 config.go:182] Loaded profile config "addons-411768": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.2
I0414 16:32:07.079976 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080017 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080140 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080154 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080160 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080166 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080170 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080172 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080191 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080193 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080236 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.080248 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.080320 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.080324 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.080353 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.080354 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.080561 157245 out.go:177] * Verifying Kubernetes components...
I0414 16:32:07.082053 157245 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 16:32:07.100792 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43937
I0414 16:32:07.100984 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35649
I0414 16:32:07.101323 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
I0414 16:32:07.101508 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.101558 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.101974 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.102128 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.102149 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.102535 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.102551 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
I0414 16:32:07.103008 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.103028 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.103053 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.103319 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.103012 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.103157 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.103397 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.103361 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.104001 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.104028 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45921
I0414 16:32:07.104229 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.104244 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.104974 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.106159 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106207 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.106287 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106319 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.106448 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106497 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.106563 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106588 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.106786 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106814 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.106924 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.106964 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.107406 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.107444 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.107481 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.107575 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46207
I0414 16:32:07.108205 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.108226 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.108890 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.108989 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.109580 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.109599 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.109973 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.110985 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.111019 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.139383 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37875
I0414 16:32:07.140059 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.140111 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.140376 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40955
I0414 16:32:07.141086 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.141228 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.141915 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.141937 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.142497 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.142873 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.147010 157245 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-411768"
I0414 16:32:07.147060 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.147527 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.147571 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.147823 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
I0414 16:32:07.148696 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.148936 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.148958 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.149379 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.149559 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.149586 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.149906 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.150643 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41571
I0414 16:32:07.150878 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.151083 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.151274 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.151842 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.152109 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:07.152136 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:07.152375 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:07.152389 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:07.152409 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:07.152468 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:07.152563 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
I0414 16:32:07.153139 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
I0414 16:32:07.153563 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.153664 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
I0414 16:32:07.154008 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.154215 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.154426 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.154447 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.154451 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:07.154486 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:07.154504 157245 main.go:141] libmachine: Making call to close connection to plugin binary
W0414 16:32:07.154646 157245 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0414 16:32:07.154793 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.154827 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.154899 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.155198 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.155525 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
I0414 16:32:07.155603 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.155628 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.155685 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.156374 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.156438 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.156522 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.156548 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.156939 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.157012 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.157230 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.157649 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.157676 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.166544 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.166595 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.167078 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.167221 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37285
I0414 16:32:07.168299 157245 addons.go:238] Setting addon default-storageclass=true in "addons-411768"
I0414 16:32:07.168337 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:07.168737 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.168761 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.169027 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.169440 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.169469 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.170052 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.170084 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.170108 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.170124 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.170571 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
I0414 16:32:07.170710 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.170768 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.170966 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.171398 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.171433 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.171607 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.171739 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37227
I0414 16:32:07.172096 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.172116 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.172628 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.173234 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.173276 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.173649 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.173864 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.174302 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.174318 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.174464 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
I0414 16:32:07.174901 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.174982 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.175521 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.175540 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.175605 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
I0414 16:32:07.175657 157245 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I0414 16:32:07.176200 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.176421 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.177749 157245 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0414 16:32:07.178217 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.178263 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.178474 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.179207 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.179906 157245 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0414 16:32:07.180518 157245 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I0414 16:32:07.181218 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.181240 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.181498 157245 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0414 16:32:07.181519 157245 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0414 16:32:07.181538 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.181814 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.181864 157245 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0414 16:32:07.181879 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0414 16:32:07.181895 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.182055 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.184510 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.186720 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36159
I0414 16:32:07.187062 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.187251 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.187505 157245 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I0414 16:32:07.187748 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.187764 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.188252 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.188272 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.188315 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.188511 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.189052 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.189775 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.190164 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.190286 157245 out.go:177] - Using image docker.io/registry:2.8.3
I0414 16:32:07.190788 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.190982 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.190985 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.191008 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.191206 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.191582 157245 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0414 16:32:07.191662 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
I0414 16:32:07.191696 157245 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
I0414 16:32:07.191714 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I0414 16:32:07.191731 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.191797 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.192031 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.192638 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0414 16:32:07.192655 157245 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0414 16:32:07.192672 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.192734 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
I0414 16:32:07.192733 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.193486 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.193581 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.194125 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.194147 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.194605 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.195441 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.195492 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.197319 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.197626 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.197727 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.197760 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.198074 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.198172 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.198425 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.198441 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.198465 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.198484 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.198678 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.198695 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.198891 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.198901 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.199108 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.200232 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
I0414 16:32:07.200716 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.200737 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.200828 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.201113 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.201433 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.201460 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.201811 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.201889 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.202096 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.203805 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.205350 157245 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I0414 16:32:07.206613 157245 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0414 16:32:07.206633 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I0414 16:32:07.206650 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.210509 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.210985 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.211006 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.211198 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.211363 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.211534 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.211599 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44977
I0414 16:32:07.211973 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.212415 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.212839 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.212874 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.213060 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
I0414 16:32:07.213269 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.213431 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.213434 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.214366 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.214408 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.215015 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.215104 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
I0414 16:32:07.215269 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.215611 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
I0414 16:32:07.215662 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.215770 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
I0414 16:32:07.216109 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.216207 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.216247 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.216465 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.216837 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.217020 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.217173 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.217361 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.217521 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.217548 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.217677 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.217697 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.218192 157245 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0414 16:32:07.218203 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.218396 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46505
I0414 16:32:07.218677 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.218779 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.218818 157245 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
I0414 16:32:07.219031 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.219082 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.219246 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.219360 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.219380 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.219389 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.219879 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.219919 157245 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
I0414 16:32:07.219939 157245 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I0414 16:32:07.219957 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.220083 157245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0414 16:32:07.220101 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0414 16:32:07.220116 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.220458 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:07.220511 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:07.221225 157245 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0414 16:32:07.222073 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.223405 157245 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0414 16:32:07.223418 157245 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I0414 16:32:07.224216 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.224492 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.224521 157245 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0414 16:32:07.224540 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0414 16:32:07.224556 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.224655 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.224665 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.224838 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.224887 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.224935 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.225054 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.225090 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.225234 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.225350 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.225487 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.225480 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.225632 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.225733 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0414 16:32:07.226989 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0414 16:32:07.227980 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0414 16:32:07.228203 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.228578 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.228591 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.228784 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.228967 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.229056 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.229171 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.230056 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0414 16:32:07.231280 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0414 16:32:07.232584 157245 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0414 16:32:07.234202 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0414 16:32:07.234220 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0414 16:32:07.234240 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.237573 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.238511 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.238533 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.238730 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.238913 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.239093 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.239107 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43417
I0414 16:32:07.239287 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.239509 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.239916 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.239938 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.240346 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.240551 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.242582 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.244043 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
I0414 16:32:07.244097 157245 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.31
I0414 16:32:07.244568 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.245058 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.245084 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.245268 157245 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
I0414 16:32:07.245283 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0414 16:32:07.245300 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.245448 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.245618 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.247437 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
I0414 16:32:07.248054 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.248448 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36205
I0414 16:32:07.248483 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.248625 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.248647 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.248721 157245 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0414 16:32:07.248736 157245 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0414 16:32:07.248753 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.248778 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42823
I0414 16:32:07.249095 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.249150 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.249553 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:07.249682 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.249908 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.250096 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.250114 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.250215 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.250233 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.250313 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.250447 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.250547 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.250645 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.250933 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.251166 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:07.251178 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:07.251231 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.251689 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.251748 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:07.251949 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:07.252944 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.253356 157245 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I0414 16:32:07.253491 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.253511 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.253559 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.253612 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.253755 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.253822 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:07.253904 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.254079 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.254626 157245 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0414 16:32:07.254646 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0414 16:32:07.254662 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.255226 157245 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0414 16:32:07.255367 157245 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I0414 16:32:07.256210 157245 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
I0414 16:32:07.256230 157245 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0414 16:32:07.256248 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.257502 157245 out.go:177] - Using image docker.io/busybox:stable
I0414 16:32:07.257586 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.258106 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.258158 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.258374 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.258605 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.258625 157245 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0414 16:32:07.258637 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0414 16:32:07.258659 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:07.258745 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.258912 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.259601 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.260048 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.260074 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.260218 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
W0414 16:32:07.260270 157245 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36916->192.168.39.237:22: read: connection reset by peer
I0414 16:32:07.260379 157245 retry.go:31] will retry after 256.435878ms: ssh: handshake failed: read tcp 192.168.39.1:36916->192.168.39.237:22: read: connection reset by peer
I0414 16:32:07.260594 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.260750 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.260958 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.261482 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.261902 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:07.261930 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:07.262043 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:07.262205 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:07.262341 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:07.262477 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:07.510204 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0414 16:32:07.535405 157245 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
I0414 16:32:07.535428 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
I0414 16:32:07.554124 157245 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 16:32:07.554180 157245 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0414 16:32:07.590129 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0414 16:32:07.597474 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0414 16:32:07.649867 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0414 16:32:07.681091 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I0414 16:32:07.758590 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0414 16:32:07.769413 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 16:32:07.785004 157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0414 16:32:07.785026 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0414 16:32:07.801970 157245 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
I0414 16:32:07.801990 157245 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0414 16:32:07.901876 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I0414 16:32:07.905329 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0414 16:32:07.905346 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0414 16:32:07.959073 157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0414 16:32:07.959099 157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0414 16:32:08.000884 157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0414 16:32:08.000913 157245 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0414 16:32:08.003809 157245 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
I0414 16:32:08.003828 157245 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0414 16:32:08.057722 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0414 16:32:08.080964 157245 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
I0414 16:32:08.080985 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0414 16:32:08.152996 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0414 16:32:08.153022 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0414 16:32:08.237587 157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0414 16:32:08.237623 157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0414 16:32:08.278538 157245 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0414 16:32:08.278567 157245 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0414 16:32:08.309584 157245 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
I0414 16:32:08.309611 157245 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0414 16:32:08.351902 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0414 16:32:08.360705 157245 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0414 16:32:08.360724 157245 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0414 16:32:08.399688 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0414 16:32:08.399711 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0414 16:32:08.587012 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 16:32:08.592210 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0414 16:32:08.592232 157245 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0414 16:32:08.604465 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0414 16:32:08.604483 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0414 16:32:08.609658 157245 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
I0414 16:32:08.609676 157245 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0414 16:32:08.816206 157245 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
I0414 16:32:08.816228 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0414 16:32:08.899853 157245 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0414 16:32:08.899885 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0414 16:32:08.947672 157245 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0414 16:32:08.947697 157245 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0414 16:32:09.017802 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0414 16:32:09.179922 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.669675177s)
I0414 16:32:09.179969 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:09.179982 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:09.180348 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:09.180371 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:09.180426 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:09.180457 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:09.180470 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:09.180779 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:09.180779 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:09.180810 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:09.255913 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0414 16:32:09.255935 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0414 16:32:09.275044 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0414 16:32:09.425145 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0414 16:32:09.425176 157245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0414 16:32:09.695982 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0414 16:32:09.696008 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0414 16:32:09.975776 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0414 16:32:09.975807 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0414 16:32:10.172563 157245 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.618340999s)
I0414 16:32:10.172608 157245 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I0414 16:32:10.172610 157245 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.618454484s)
I0414 16:32:10.173257 157245 node_ready.go:35] waiting up to 6m0s for node "addons-411768" to be "Ready" ...
I0414 16:32:10.181456 157245 node_ready.go:49] node "addons-411768" has status "Ready":"True"
I0414 16:32:10.181473 157245 node_ready.go:38] duration metric: took 8.190521ms for node "addons-411768" to be "Ready" ...
I0414 16:32:10.181481 157245 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 16:32:10.197447 157245 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace to be "Ready" ...
I0414 16:32:10.476856 157245 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0414 16:32:10.476890 157245 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0414 16:32:10.694797 157245 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-411768" context rescaled to 1 replicas
I0414 16:32:10.812006 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0414 16:32:11.763503 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.173337705s)
I0414 16:32:11.763552 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:11.763564 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:11.763929 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:11.764027 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:11.764045 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:11.764052 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:11.764056 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:11.764342 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:11.764386 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:11.764416 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:12.214485 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:14.103020 157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0414 16:32:14.103060 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:14.106672 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:14.107169 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:14.107212 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:14.107372 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:14.107542 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:14.107674 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:14.107803 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:14.256447 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:14.510568 157245 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0414 16:32:14.541375 157245 addons.go:238] Setting addon gcp-auth=true in "addons-411768"
I0414 16:32:14.541425 157245 host.go:66] Checking if "addons-411768" exists ...
I0414 16:32:14.541750 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:14.541776 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:14.557041 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32787
I0414 16:32:14.557516 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:14.557977 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:14.558000 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:14.558349 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:14.558968 157245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0414 16:32:14.559036 157245 main.go:141] libmachine: Launching plugin server for driver kvm2
I0414 16:32:14.574691 157245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36379
I0414 16:32:14.575221 157245 main.go:141] libmachine: () Calling .GetVersion
I0414 16:32:14.575692 157245 main.go:141] libmachine: Using API Version 1
I0414 16:32:14.575711 157245 main.go:141] libmachine: () Calling .SetConfigRaw
I0414 16:32:14.576075 157245 main.go:141] libmachine: () Calling .GetMachineName
I0414 16:32:14.576279 157245 main.go:141] libmachine: (addons-411768) Calling .GetState
I0414 16:32:14.577929 157245 main.go:141] libmachine: (addons-411768) Calling .DriverName
I0414 16:32:14.578171 157245 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0414 16:32:14.578199 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHHostname
I0414 16:32:14.580687 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:14.581036 157245 main.go:141] libmachine: (addons-411768) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:81:2d:89", ip: ""} in network mk-addons-411768: {Iface:virbr1 ExpiryTime:2025-04-14 17:31:33 +0000 UTC Type:0 Mac:52:54:00:81:2d:89 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-411768 Clientid:01:52:54:00:81:2d:89}
I0414 16:32:14.581073 157245 main.go:141] libmachine: (addons-411768) DBG | domain addons-411768 has defined IP address 192.168.39.237 and MAC address 52:54:00:81:2d:89 in network mk-addons-411768
I0414 16:32:14.581207 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHPort
I0414 16:32:14.581385 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHKeyPath
I0414 16:32:14.581540 157245 main.go:141] libmachine: (addons-411768) Calling .GetSSHUsername
I0414 16:32:14.581712 157245 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20349-149500/.minikube/machines/addons-411768/id_rsa Username:docker}
I0414 16:32:15.941800 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.344287225s)
I0414 16:32:15.941856 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.941876 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.941938 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.2920377s)
I0414 16:32:15.942010 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.260894337s)
I0414 16:32:15.942029 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942040 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942053 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942042 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942098 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.183462668s)
I0414 16:32:15.942132 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942142 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942201 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.942216 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.942231 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942243 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942273 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.17283585s)
I0414 16:32:15.942299 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942311 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942366 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.040465575s)
I0414 16:32:15.942381 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942388 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942405 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.942429 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.942451 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.884706718s)
I0414 16:32:15.942458 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.942460 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.942466 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942467 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.942474 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942478 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942482 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.942485 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942489 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.942499 157245 addons.go:479] Verifying addon ingress=true in "addons-411768"
I0414 16:32:15.942531 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.590606213s)
I0414 16:32:15.942553 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942571 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942686 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.355641837s)
I0414 16:32:15.942700 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942707 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942722 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.942753 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.942761 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.942778 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.924946651s)
I0414 16:32:15.942791 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942792 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.942800 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942823 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.942830 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.942838 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.942845 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.942917 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.667841574s)
W0414 16:32:15.942972 157245 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0414 16:32:15.942998 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.943004 157245 retry.go:31] will retry after 326.879938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0414 16:32:15.943028 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.943036 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.943043 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.943068 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.943080 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.943097 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.943103 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.943141 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.943045 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.943310 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.943336 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.943343 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.944483 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.944510 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.944518 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.944526 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.944533 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.944598 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.944615 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.944621 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.944627 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.944633 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.944665 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.944680 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.944686 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.944844 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.944864 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.944881 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.944888 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.944894 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.944935 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.944953 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.944959 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.945197 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.945203 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.945218 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.945227 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.945229 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.945235 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.945245 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.945599 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.945628 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.945635 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.946225 157245 out.go:177] * Verifying ingress addon...
I0414 16:32:15.947010 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.947042 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.947049 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.947057 157245 addons.go:479] Verifying addon metrics-server=true in "addons-411768"
I0414 16:32:15.947233 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.947240 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.947248 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.947254 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.947967 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.948005 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.948013 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.948021 157245 addons.go:479] Verifying addon registry=true in "addons-411768"
I0414 16:32:15.948543 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.948575 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.948583 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:15.948870 157245 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0414 16:32:15.949095 157245 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-411768 service yakd-dashboard -n yakd-dashboard
I0414 16:32:15.949141 157245 out.go:177] * Verifying registry addon...
I0414 16:32:15.951202 157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0414 16:32:15.963853 157245 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0414 16:32:15.963876 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:15.963979 157245 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0414 16:32:15.963998 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:15.983788 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.983807 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.984050 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.984084 157245 main.go:141] libmachine: Making call to close connection to plugin binary
W0414 16:32:15.984204 157245 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I0414 16:32:15.990288 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:15.990304 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:15.990587 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:15.990607 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:15.990608 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:16.271070 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0414 16:32:16.452769 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:16.455283 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:16.710065 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:16.958015 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:16.958454 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:17.438594 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.62650454s)
I0414 16:32:17.438672 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:17.438680 157245 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.860481626s)
I0414 16:32:17.438694 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:17.438991 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:17.439056 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:17.439077 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:17.439094 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:17.439104 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:17.439344 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:17.439390 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:17.439413 157245 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-411768"
I0414 16:32:17.439431 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:17.439788 157245 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I0414 16:32:17.441340 157245 out.go:177] * Verifying csi-hostpath-driver addon...
I0414 16:32:17.442480 157245 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I0414 16:32:17.443376 157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0414 16:32:17.443402 157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0414 16:32:17.443419 157245 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0414 16:32:17.454265 157245 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0414 16:32:17.454280 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:17.476839 157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0414 16:32:17.476862 157245 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0414 16:32:17.488320 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:17.488336 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:17.622440 157245 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0414 16:32:17.622467 157245 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0414 16:32:17.674372 157245 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0414 16:32:17.948045 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:17.951253 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:17.953595 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:18.031475 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.760352096s)
I0414 16:32:18.031539 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:18.031568 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:18.031852 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:18.031872 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:18.031881 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:18.031889 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:18.032212 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:18.032226 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:18.032241 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:18.448339 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:18.451605 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:18.453396 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:18.839174 157245 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.164760866s)
I0414 16:32:18.839229 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:18.839246 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:18.839543 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:18.839562 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:18.839571 157245 main.go:141] libmachine: Making call to close driver server
I0414 16:32:18.839578 157245 main.go:141] libmachine: (addons-411768) Calling .Close
I0414 16:32:18.839578 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:18.839856 157245 main.go:141] libmachine: (addons-411768) DBG | Closing plugin on server side
I0414 16:32:18.840836 157245 main.go:141] libmachine: Successfully made call to close driver server
I0414 16:32:18.840877 157245 main.go:141] libmachine: Making call to close connection to plugin binary
I0414 16:32:18.841898 157245 addons.go:479] Verifying addon gcp-auth=true in "addons-411768"
I0414 16:32:18.843365 157245 out.go:177] * Verifying gcp-auth addon...
I0414 16:32:18.845436 157245 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0414 16:32:18.864478 157245 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0414 16:32:18.864493 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:18.951850 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:18.952805 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:18.960513 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:19.207080 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:19.351692 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:19.454048 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:19.454184 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:19.457030 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:19.849673 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:19.952297 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:19.952507 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:19.956382 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:20.354981 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:20.453532 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:20.453589 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:20.454660 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:20.849015 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:20.949921 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:20.952463 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:20.955682 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:21.350320 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:21.448183 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:21.451349 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:21.453773 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:21.703126 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:21.848988 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:21.947094 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:21.952588 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:21.956537 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:22.349307 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:22.448495 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:22.455383 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:22.456667 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:22.850685 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:22.951804 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:22.953054 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:22.954477 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:23.348393 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:23.831401 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:23.831612 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:23.831616 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:23.833655 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:23.848344 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:23.947383 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:23.951562 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:23.954147 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:24.348473 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:24.446610 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:24.451762 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:24.453234 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:24.849156 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:24.949875 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:24.953181 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:24.956772 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:25.348813 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:25.446798 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:25.451886 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:25.453640 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:25.848872 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:25.950188 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:25.953238 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:25.955142 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:26.203268 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:26.349341 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:26.447357 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:26.451683 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:26.453211 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:26.849230 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:27.015308 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:27.015394 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:27.015443 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:27.349136 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:27.449512 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:27.451610 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:27.453332 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:27.848909 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:27.948032 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:27.951917 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:27.953956 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:28.203759 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:28.349313 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:28.447617 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:28.451870 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:28.453350 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:28.848443 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:28.946954 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:28.952351 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:28.954395 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:29.348655 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:29.446252 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:29.451263 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:29.453170 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:29.848582 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:29.946846 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:29.951988 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:29.953548 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:30.353521 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:30.446738 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:30.452330 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:30.453982 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:30.703364 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:30.848685 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:30.946699 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:30.951776 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:30.953401 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:31.348906 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:31.449728 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:31.451427 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:31.453114 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:31.849108 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:31.947377 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:31.951326 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:31.953162 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:32.348980 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:32.446915 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:32.452697 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:32.454386 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:32.849366 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:32.948655 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:32.952217 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:32.953750 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:33.538563 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:33.538620 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:33.538621 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:33.538892 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:33.539710 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:33.848696 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:33.947425 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:33.954704 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:33.955329 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:34.349380 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:34.447237 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:34.451437 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:34.453258 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:34.848112 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:34.946966 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:34.950998 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:34.954352 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:35.348595 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:35.447142 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:35.451238 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:35.454279 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:35.702474 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:35.848824 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:35.950254 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:35.952491 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:35.954402 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:36.359946 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:36.447210 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:36.451285 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:36.454557 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:36.848742 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:36.948697 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:36.954774 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:36.956451 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:37.348672 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:37.448857 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:37.452195 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:37.454174 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:37.703854 157245 pod_ready.go:103] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"False"
I0414 16:32:37.848946 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:37.947084 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:37.951369 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:37.953671 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:38.348432 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:38.446857 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:38.455079 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:38.455767 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:38.757296 157245 pod_ready.go:93] pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:38.757322 157245 pod_ready.go:82] duration metric: took 28.559850903s for pod "amd-gpu-device-plugin-5sprs" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.757332 157245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.772514 157245 pod_ready.go:93] pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:38.772538 157245 pod_ready.go:82] duration metric: took 15.200209ms for pod "coredns-668d6bf9bc-4wbtn" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.772547 157245 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.776123 157245 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qxz94" not found
I0414 16:32:38.776157 157245 pod_ready.go:82] duration metric: took 3.601331ms for pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace to be "Ready" ...
E0414 16:32:38.776170 157245 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-qxz94" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-qxz94" not found
I0414 16:32:38.776180 157245 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.781768 157245 pod_ready.go:93] pod "etcd-addons-411768" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:38.781787 157245 pod_ready.go:82] duration metric: took 5.596909ms for pod "etcd-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.781795 157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.790734 157245 pod_ready.go:93] pod "kube-apiserver-addons-411768" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:38.790755 157245 pod_ready.go:82] duration metric: took 8.954041ms for pod "kube-apiserver-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.790778 157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.849118 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:38.900618 157245 pod_ready.go:93] pod "kube-controller-manager-addons-411768" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:38.900648 157245 pod_ready.go:82] duration metric: took 109.861531ms for pod "kube-controller-manager-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.900661 157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bvpxd" in "kube-system" namespace to be "Ready" ...
I0414 16:32:38.946501 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:38.951752 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:38.953514 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:39.302985 157245 pod_ready.go:93] pod "kube-proxy-bvpxd" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:39.303009 157245 pod_ready.go:82] duration metric: took 402.341293ms for pod "kube-proxy-bvpxd" in "kube-system" namespace to be "Ready" ...
I0414 16:32:39.303023 157245 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:39.349076 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:39.447049 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:39.451245 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:39.452991 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:39.700762 157245 pod_ready.go:93] pod "kube-scheduler-addons-411768" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:39.700786 157245 pod_ready.go:82] duration metric: took 397.756511ms for pod "kube-scheduler-addons-411768" in "kube-system" namespace to be "Ready" ...
I0414 16:32:39.700795 157245 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace to be "Ready" ...
I0414 16:32:39.848696 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:39.946533 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:39.951892 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:39.953500 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:40.101448 157245 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace has status "Ready":"True"
I0414 16:32:40.101473 157245 pod_ready.go:82] duration metric: took 400.672099ms for pod "nvidia-device-plugin-daemonset-fqwqf" in "kube-system" namespace to be "Ready" ...
I0414 16:32:40.101484 157245 pod_ready.go:39] duration metric: took 29.919990415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 16:32:40.101509 157245 api_server.go:52] waiting for apiserver process to appear ...
I0414 16:32:40.101578 157245 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0414 16:32:40.120824 157245 api_server.go:72] duration metric: took 33.042211856s to wait for apiserver process to appear ...
I0414 16:32:40.120853 157245 api_server.go:88] waiting for apiserver healthz status ...
I0414 16:32:40.120877 157245 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
I0414 16:32:40.125585 157245 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
ok
I0414 16:32:40.126714 157245 api_server.go:141] control plane version: v1.32.2
I0414 16:32:40.126735 157245 api_server.go:131] duration metric: took 5.873926ms to wait for apiserver health ...
I0414 16:32:40.126745 157245 system_pods.go:43] waiting for kube-system pods to appear ...
I0414 16:32:40.303844 157245 system_pods.go:59] 18 kube-system pods found
I0414 16:32:40.303885 157245 system_pods.go:61] "amd-gpu-device-plugin-5sprs" [36ab44cd-e5cd-47dc-97c9-9b9566809a07] Running
I0414 16:32:40.303893 157245 system_pods.go:61] "coredns-668d6bf9bc-4wbtn" [efde3561-f910-4083-a045-d58c8fdcf7f5] Running
I0414 16:32:40.303904 157245 system_pods.go:61] "csi-hostpath-attacher-0" [ed55eafd-36ee-4183-9d67-d584935ba068] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0414 16:32:40.303912 157245 system_pods.go:61] "csi-hostpath-resizer-0" [1c5ebede-4ffc-4554-98e8-6b877134818e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0414 16:32:40.303927 157245 system_pods.go:61] "csi-hostpathplugin-mh59q" [b4e6a15d-c481-4c65-8460-c1e3cd4fd26a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0414 16:32:40.303940 157245 system_pods.go:61] "etcd-addons-411768" [982b0174-0ec5-4ad4-915e-d2ce2e6ac0af] Running
I0414 16:32:40.303946 157245 system_pods.go:61] "kube-apiserver-addons-411768" [78984217-deb9-4f02-8509-1d209433f3bc] Running
I0414 16:32:40.303951 157245 system_pods.go:61] "kube-controller-manager-addons-411768" [1a00c32b-242d-4a58-988c-8eeadd7b5e47] Running
I0414 16:32:40.303956 157245 system_pods.go:61] "kube-ingress-dns-minikube" [d52dc595-cef9-487e-9ae1-d5f31774779b] Running
I0414 16:32:40.303960 157245 system_pods.go:61] "kube-proxy-bvpxd" [240f2e9d-199b-4666-8144-1af7bb751178] Running
I0414 16:32:40.303964 157245 system_pods.go:61] "kube-scheduler-addons-411768" [5f5f1249-7d39-44d1-9dc3-046dda9255c1] Running
I0414 16:32:40.303971 157245 system_pods.go:61] "metrics-server-7fbb699795-s4bdh" [fb315cc6-a736-467a-8f3f-7e48a315f789] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0414 16:32:40.303977 157245 system_pods.go:61] "nvidia-device-plugin-daemonset-fqwqf" [e5a2d34f-7429-47b0-9239-917c6907123c] Running
I0414 16:32:40.303985 157245 system_pods.go:61] "registry-6c88467877-5vmwg" [e9f17d14-6916-4171-aba7-15b3d6dab565] Running
I0414 16:32:40.303992 157245 system_pods.go:61] "registry-proxy-bpsmn" [998c1dc5-a7ac-4e6d-a29f-01c054cb33e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0414 16:32:40.304004 157245 system_pods.go:61] "snapshot-controller-68b874b76f-25g84" [ae23e104-95da-40ae-80b9-0400fb264d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0414 16:32:40.304018 157245 system_pods.go:61] "snapshot-controller-68b874b76f-8gfxk" [08f31c66-3f2c-442e-bed1-d74113220a4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0414 16:32:40.304027 157245 system_pods.go:61] "storage-provisioner" [016d9cef-9f4d-4edc-9108-2b5b76533cc7] Running
I0414 16:32:40.304044 157245 system_pods.go:74] duration metric: took 177.290903ms to wait for pod list to return data ...
I0414 16:32:40.304057 157245 default_sa.go:34] waiting for default service account to be created ...
I0414 16:32:40.349238 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:40.448144 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:40.451263 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:40.453751 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:40.501406 157245 default_sa.go:45] found service account: "default"
I0414 16:32:40.501439 157245 default_sa.go:55] duration metric: took 197.36909ms for default service account to be created ...
I0414 16:32:40.501452 157245 system_pods.go:116] waiting for k8s-apps to be running ...
I0414 16:32:40.704542 157245 system_pods.go:86] 18 kube-system pods found
I0414 16:32:40.704628 157245 system_pods.go:89] "amd-gpu-device-plugin-5sprs" [36ab44cd-e5cd-47dc-97c9-9b9566809a07] Running
I0414 16:32:40.704652 157245 system_pods.go:89] "coredns-668d6bf9bc-4wbtn" [efde3561-f910-4083-a045-d58c8fdcf7f5] Running
I0414 16:32:40.704673 157245 system_pods.go:89] "csi-hostpath-attacher-0" [ed55eafd-36ee-4183-9d67-d584935ba068] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I0414 16:32:40.704695 157245 system_pods.go:89] "csi-hostpath-resizer-0" [1c5ebede-4ffc-4554-98e8-6b877134818e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I0414 16:32:40.704725 157245 system_pods.go:89] "csi-hostpathplugin-mh59q" [b4e6a15d-c481-4c65-8460-c1e3cd4fd26a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I0414 16:32:40.704740 157245 system_pods.go:89] "etcd-addons-411768" [982b0174-0ec5-4ad4-915e-d2ce2e6ac0af] Running
I0414 16:32:40.704755 157245 system_pods.go:89] "kube-apiserver-addons-411768" [78984217-deb9-4f02-8509-1d209433f3bc] Running
I0414 16:32:40.704773 157245 system_pods.go:89] "kube-controller-manager-addons-411768" [1a00c32b-242d-4a58-988c-8eeadd7b5e47] Running
I0414 16:32:40.704789 157245 system_pods.go:89] "kube-ingress-dns-minikube" [d52dc595-cef9-487e-9ae1-d5f31774779b] Running
I0414 16:32:40.704804 157245 system_pods.go:89] "kube-proxy-bvpxd" [240f2e9d-199b-4666-8144-1af7bb751178] Running
I0414 16:32:40.704815 157245 system_pods.go:89] "kube-scheduler-addons-411768" [5f5f1249-7d39-44d1-9dc3-046dda9255c1] Running
I0414 16:32:40.704828 157245 system_pods.go:89] "metrics-server-7fbb699795-s4bdh" [fb315cc6-a736-467a-8f3f-7e48a315f789] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I0414 16:32:40.704834 157245 system_pods.go:89] "nvidia-device-plugin-daemonset-fqwqf" [e5a2d34f-7429-47b0-9239-917c6907123c] Running
I0414 16:32:40.704841 157245 system_pods.go:89] "registry-6c88467877-5vmwg" [e9f17d14-6916-4171-aba7-15b3d6dab565] Running
I0414 16:32:40.704854 157245 system_pods.go:89] "registry-proxy-bpsmn" [998c1dc5-a7ac-4e6d-a29f-01c054cb33e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I0414 16:32:40.704878 157245 system_pods.go:89] "snapshot-controller-68b874b76f-25g84" [ae23e104-95da-40ae-80b9-0400fb264d20] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0414 16:32:40.704891 157245 system_pods.go:89] "snapshot-controller-68b874b76f-8gfxk" [08f31c66-3f2c-442e-bed1-d74113220a4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I0414 16:32:40.704900 157245 system_pods.go:89] "storage-provisioner" [016d9cef-9f4d-4edc-9108-2b5b76533cc7] Running
I0414 16:32:40.704918 157245 system_pods.go:126] duration metric: took 203.457402ms to wait for k8s-apps to be running ...
I0414 16:32:40.704931 157245 system_svc.go:44] waiting for kubelet service to be running ....
I0414 16:32:40.704991 157245 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0414 16:32:40.744640 157245 system_svc.go:56] duration metric: took 39.697803ms WaitForService to wait for kubelet
I0414 16:32:40.744678 157245 kubeadm.go:582] duration metric: took 33.666067886s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 16:32:40.744702 157245 node_conditions.go:102] verifying NodePressure condition ...
I0414 16:32:40.848296 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:40.900910 157245 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0414 16:32:40.900950 157245 node_conditions.go:123] node cpu capacity is 2
I0414 16:32:40.900969 157245 node_conditions.go:105] duration metric: took 156.260423ms to run NodePressure ...
I0414 16:32:40.900986 157245 start.go:241] waiting for startup goroutines ...
I0414 16:32:40.948060 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:40.952210 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:40.954330 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:41.349207 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:41.449925 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:41.451895 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:41.454102 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:41.849540 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:41.946801 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:41.952175 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:41.953995 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:42.348874 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:42.447461 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:42.452190 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:42.453732 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:42.848731 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:42.946994 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:42.956352 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:42.956446 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:43.348408 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:43.446293 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:43.451711 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:43.453538 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:43.849130 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:43.947364 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:43.951553 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:43.953132 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:44.348343 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:44.447463 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:44.451687 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:44.453447 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:44.848569 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:44.947455 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:44.951433 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:44.953252 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:45.348062 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:45.447616 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:45.458880 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:45.459552 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:45.849274 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:45.947491 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:45.951883 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:45.953345 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:46.348451 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:46.447192 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:46.451277 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:46.453137 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:46.848083 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:46.949577 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:46.951882 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:46.953946 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:47.349070 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:47.447242 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:47.453140 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:47.460714 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:47.851286 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:47.973211 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:47.973545 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:47.973649 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:48.349452 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:48.446643 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:48.451811 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:48.453443 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:48.849507 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:48.948358 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:48.952345 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:48.953621 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:49.348680 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:49.446684 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:49.451765 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:49.453436 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:49.848769 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:49.949347 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:49.951365 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:49.955536 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:50.348756 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:50.447095 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:50.451217 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:50.454357 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:50.848812 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:50.946721 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:50.951807 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:50.953312 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:51.348748 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:51.449754 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:51.451424 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:51.455933 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0414 16:32:51.849941 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:51.954605 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:51.954789 157245 kapi.go:107] duration metric: took 36.003587521s to wait for kubernetes.io/minikube-addons=registry ...
I0414 16:32:51.955284 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:52.349233 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:52.447475 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:52.451848 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:53.172649 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:53.172676 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:53.172696 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:53.349972 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:53.446815 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:53.452137 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:53.849416 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:53.946973 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:53.951898 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:54.349376 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:54.447701 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:54.452200 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:54.849182 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:54.947048 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:54.951010 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:55.349214 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:55.450278 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:55.451912 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:55.848716 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:56.057257 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:56.057395 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:56.348195 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:56.447145 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:56.451650 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:56.850073 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:56.947838 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:56.952376 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:57.349507 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:57.447095 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:57.451199 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:57.849066 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:57.948492 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:57.952815 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:58.350477 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:58.449774 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:58.452698 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:58.848386 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:58.946694 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:58.952283 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:59.348399 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:59.447527 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:32:59.452414 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:59.847990 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:32:59.986489 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:32:59.987975 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:00.349010 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:00.448767 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:00.452053 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:00.849430 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:00.952275 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:00.954676 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:01.349153 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:01.448527 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:01.456254 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:01.848043 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:01.947603 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:01.953318 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:02.349283 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:02.447325 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:02.453051 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:02.855651 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:02.947573 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:02.952588 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:03.348808 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:03.450311 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:03.460697 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:03.851384 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:03.947183 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:03.951624 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:04.348396 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:04.447643 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:04.451554 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:04.848571 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:04.949789 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:04.951979 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:05.350356 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:05.450935 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:05.453558 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:05.847928 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:05.946542 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:05.951883 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:06.350345 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:06.450847 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:06.453394 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:06.848394 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:06.946485 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:06.951828 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:07.349692 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:07.447385 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:07.452297 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:07.849393 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:07.957936 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:07.958093 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:08.355339 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:08.447917 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:08.453897 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:08.849195 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:08.947667 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:08.952021 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:09.570096 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:09.570104 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:09.570107 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:09.848076 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:09.947066 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:09.951111 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:10.349636 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:10.448136 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:10.452143 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:10.851381 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:10.952538 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:10.953948 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:11.349901 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:11.453477 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:11.453495 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:11.848650 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:11.947300 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:11.952069 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:12.349743 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:12.446712 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:12.453687 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:12.848548 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:12.946674 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:12.952124 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:13.348656 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:13.449230 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:13.451702 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:13.848623 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:13.947173 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:13.951303 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:14.349905 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:14.447739 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:14.453919 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:14.849253 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:14.947796 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:14.952226 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:15.348251 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:15.447244 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:15.451223 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:15.848356 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:15.947247 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:15.951021 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:16.349536 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:16.446833 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0414 16:33:16.452103 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:16.849099 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:16.947667 157245 kapi.go:107] duration metric: took 59.504288743s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0414 16:33:16.951675 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:17.349541 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:17.452610 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:17.848578 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:17.952559 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:18.349431 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:18.452292 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:18.848861 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:18.951382 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:19.348446 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:19.452745 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:19.848520 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:19.953515 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:20.349217 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:20.452153 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:20.849767 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:20.954626 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:21.349396 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:21.453058 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:21.849393 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:21.952752 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:22.349016 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:22.452003 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:22.848787 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:22.953552 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:23.348180 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:23.452353 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:24.006289 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:24.006395 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:24.348590 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:24.452724 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:24.848755 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:24.952902 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:25.349172 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:25.452537 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:25.848724 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:25.953008 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:26.376878 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:26.626133 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:26.849564 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:26.954952 157245 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0414 16:33:27.348973 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:27.451663 157245 kapi.go:107] duration metric: took 1m11.502788648s to wait for app.kubernetes.io/name=ingress-nginx ...
I0414 16:33:27.848646 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:28.348318 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:28.849726 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:29.348791 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:29.848989 157245 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0414 16:33:30.351381 157245 kapi.go:107] duration metric: took 1m11.505944449s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0414 16:33:30.352770 157245 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-411768 cluster.
I0414 16:33:30.353958 157245 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0414 16:33:30.355101 157245 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0414 16:33:30.356325 157245 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, ingress-dns, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0414 16:33:30.357346 157245 addons.go:514] duration metric: took 1m23.278682879s for enable addons: enabled=[nvidia-device-plugin cloud-spanner amd-gpu-device-plugin storage-provisioner inspektor-gadget metrics-server ingress-dns yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I0414 16:33:30.357383 157245 start.go:246] waiting for cluster config update ...
I0414 16:33:30.357398 157245 start.go:255] writing updated cluster config ...
I0414 16:33:30.357640 157245 ssh_runner.go:195] Run: rm -f paused
I0414 16:33:30.409184 157245 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
I0414 16:33:30.410660 157245 out.go:177] * Done! kubectl is now configured to use "addons-411768" cluster and "default" namespace by default
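The repeated kapi.go:96 entries above are a label-selector poll: the addon wait re-lists pods matching a selector until they leave Pending or a timeout expires. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace, selector, timeout, and poll interval are illustrative assumptions, not minikube's actual configuration.

// Sketch only: poll pods by label selector until one is Running (assumed values throughout).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); assumed, not minikube's internal config handling.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const (
		namespace = "ingress-nginx"                      // assumed namespace
		selector  = "app.kubernetes.io/name=ingress-nginx"
		timeout   = 90 * time.Second                     // assumed timeout
	)

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
	}
	fmt.Println("timed out waiting for", selector)
}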
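The gcp-auth message above mentions opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A hypothetical sketch of a pod spec carrying that label follows; the pod name, image, and label value "true" are assumptions, and the program only prints JSON that could be piped to `kubectl apply -f -`.

// Sketch only: build a pod manifest carrying the gcp-auth-skip-secret label and print it as JSON.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",                                   // assumed name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // label key from the log; value assumed
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}}, // assumed container
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}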
==> CRI-O <==
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.408091865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588408069639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3662606-6411-4324-bee1-937418bbdbbb name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.414932317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.414983326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.415306779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bee1329-b335-4182-a8cb-803574dcd2f3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.464191576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d1f1d70-9ed3-4656-b864-fac6729bf900 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.464260326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d1f1d70-9ed3-4656-b864-fac6729bf900 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.465207887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fadf3eb1-fd77-4bad-aef7-1f7561e7287e name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.466627739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588466602013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fadf3eb1-fd77-4bad-aef7-1f7561e7287e name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467017229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467095490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.467507182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46816f18-1e01-4e86-87e1-81e3bb7680a7 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.501354938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31883136-7842-49fa-bfc4-d7a058fb9fa9 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.501488890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31883136-7842-49fa-bfc4-d7a058fb9fa9 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.502624438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f3c50c56-12cc-4170-98a4-1d7d8011f4d5 name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.504177422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588504099379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3c50c56-12cc-4170-98a4-1d7d8011f4d5 name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.505119255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3dc87da-52b4-42aa-aa41-d9c85e5f2fb3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.505175135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3dc87da-52b4-42aa-aa41-d9c85e5f2fb3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.505916114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3dc87da-52b4-42aa-aa41-d9c85e5f2fb3 name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.536609738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=251ad293-8846-46e0-8751-51fa5c0e7de5 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.536680629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=251ad293-8846-46e0-8751-51fa5c0e7de5 name=/runtime.v1.RuntimeService/Version
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.542139252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c065b186-ef15-484a-abdd-378433c3a40b name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.543500185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648588543477837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604414,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c065b186-ef15-484a-abdd-378433c3a40b name=/runtime.v1.ImageService/ImageFsInfo
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.544152272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6932bd37-f48e-4f0f-b2e9-262c3abdcdfd name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.544203834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6932bd37-f48e-4f0f-b2e9-262c3abdcdfd name=/runtime.v1.RuntimeService/ListContainers
Apr 14 16:36:28 addons-411768 crio[665]: time="2025-04-14 16:36:28.544593734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:864bc3bd70a07950a38583f2b3082eb40373b9fcbd2af6feb6147d181d66a10c,PodSandboxId:da1a9986402173c5f2d5aeacb8e485646b508681d0808420010e0152c6bd6873,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1744648588234839801,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-7d9564db4-9xf4s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c049a27d-8170-46f8-8ed9-29e70b408cdb,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd6e08abce008c9511b4593999124a65069fa49ea9441c21f99e670c293a1068,PodSandboxId:b990b1184119db432e80765ab867512c16cc942ab1222529874f6ad764768338,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07,State:CONTAINER_RUNNING,CreatedAt:1744648450683029678,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6fdc475-449b-4a8c-a72c-3d42ef531b1c,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef31858c68b561febaa6af346a0f21d70ca9b2b3765e2305e84d0e21d69fb6d,PodSandboxId:5e0446e945c2909449ffc2b06a012b560037888fdd4e9e4681dd6cd08d9fa4b5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1744648413638559611,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2799830-4b53-4013-83
79-64bfa1b342a4,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d31f652b18cc075da33290ebdcfe04719706fb046161f256ed9b2ac18362871,PodSandboxId:97bf244602bf97382a9da647ec38bbb0c9e835984d41a077a69dc3423faeaca3,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1744648406753083055,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-h2flx,io.kubernetes.pod.namespace: ingress-nginx,io
.kubernetes.pod.uid: 58048470-a007-4f79-9b05-cc4fe6169041,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:7999b897d790728b8ef63657c9e4f11780d63187673cd9f0054d4e4aa6b8444f,PodSandboxId:e43e8c82e534431759c14e648906333bd2965fdefb303c24f8176a1402fb2630,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff
8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648386374515997,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qtplm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 56e4893f-e50c-4e07-aa67-5eac91793235,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab025d3e20c73a414b15ccf9be6abad1ec88729a22aa7861a906948af8397b6a,PodSandboxId:f1fda8ad102985b1525fc3315de4e8205419dd271ca9bd87830894475e3ac0f7,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbf
bb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1744648383322218841,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-lqjfj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8fa21e9-a3b4-4266-9b5b-5bd2b8518b0b,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e305f3d82f4b8539b9c0d7ed4cdf6fac70e369fdd6f0230eee3b9bd5535ab1a2,PodSandboxId:1ae0af87b54a7e3c171ae8ba4c21025bab00e508f1206005eaf9030b091edac2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher
/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1744648367937333586,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-vxlrn,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 123bd649-9c06-4a40-8c9e-88219f0ea2e3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72b0e851c40380144991f853f0683e4ad9ceb9e34f4bdff113726c0d58980165,PodSandboxId:6e8c72550080eeaeb065fffd901f77cd8d486c92dee0ae5bb8b0ee4fa28ba039,Metadata:&ContainerMetadata{Name:amd-gp
u-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1744648357458269550,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5sprs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36ab44cd-e5cd-47dc-97c9-9b9566809a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e84062959d36d1d67c2b57e182898fa6d4c8c88812627de01b73a6e779bd6be,PodSandboxId:81e1ef9d686de5f615e5a2cadbc014819acaaa42741e55eea96a6a080a6d179b
,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1744648355908772322,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52dc595-cef9-487e-9ae1-d5f31774779b,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:693f2
719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564,PodSandboxId:40f2526f24fa491b727aaa3eefce3c3983282312c9d3f31cd6f1afea049852e8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1744648333887348299,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 016d9cef-9f4d-4edc-9108-2b5b76533cc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e3fb96f0ab62db9
2d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7,PodSandboxId:f37590b0584b54ef015afff93bfd7470ebadcaba595881881a89d1152a6edd45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1744648331230083492,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-4wbtn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efde3561-f910-4083-a045-d58c8fdcf7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411,PodSandboxId:7aa9c0248892779b42800bf5da99b04ba2e120c0c4007dd635f40855c8dd750b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5,State:CONTAINER_RUNNING,CreatedAt:1744648328482796000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bvpxd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240f2e9d-199b-4666-8144-1af7bb751178,},Annotations:map[string]string{io.kubernetes.container.hash: b4fecc5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67,PodSandboxId:e3f03985ff064594539048ff54615a10308a18d629901f25b55272bccfdc6c03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389,State:CONTAINER_RUNNING,CreatedAt:1744648317361852695,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 240bc76e760adf0d34e672e8e10bfb1f,},Annotations:map[string]string{io.kubernetes.container.hash: 51692d3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminati
onMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769,PodSandboxId:3419ef801ad4e6e1e5b0c26d90bccfa60f2a7898a8b8462f9d6cc36f00ee6802,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1744648317402210134,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8827044f306ba1d367ed9bf7b6d0c8db,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20,PodSandboxId:cbe8e675b255438d6953d82b2ce141ac3bf4bfb4eb9b99bf5f2856609a06d960,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d,State:CONTAINER_RUNNING,CreatedAt:1744648317379828906,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f9dbe492a9a16ef8bdd576105b9300,},Annotations:map[string]string{io.kubernetes.container.hash: 4c5aaea3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3,PodSandboxId:7b1c6bea47c8c09d4f6c05df59ad94fecb1f794e9a27db34685a976d59f93aff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef,State:CONTAINER_RUNNING,CreatedAt:1744648317327529934,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-411768,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5658f9d0e92ff7a619dc5f35f6f2df6,},Annotations:map[string]string{io.kubernetes.container.hash: 7745040f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6932bd37-f48e-4f0f-b2e9-262c3abdcdfd name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
864bc3bd70a07 docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 Less than a second ago Running hello-world-app 0 da1a998640217 hello-world-app-7d9564db4-9xf4s
cd6e08abce008 docker.io/library/nginx@sha256:4ff102c5d78d254a6f0da062b3cf39eaf07f01eec0927fd21e219d0af8bc0591 2 minutes ago Running nginx 0 b990b1184119d nginx
9ef31858c68b5 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 5e0446e945c29 busybox
5d31f652b18cc registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 97bf244602bf9 ingress-nginx-controller-56d7c84fd4-h2flx
7999b897d7907 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited patch 0 e43e8c82e5344 ingress-nginx-admission-patch-qtplm
ab025d3e20c73 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 f1fda8ad10298 ingress-nginx-admission-create-lqjfj
e305f3d82f4b8 docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 1ae0af87b54a7 local-path-provisioner-76f89f99b5-vxlrn
72b0e851c4038 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 3 minutes ago Running amd-gpu-device-plugin 0 6e8c72550080e amd-gpu-device-plugin-5sprs
0e84062959d36 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 3 minutes ago Running minikube-ingress-dns 0 81e1ef9d686de kube-ingress-dns-minikube
693f2719e9989 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 40f2526f24fa4 storage-provisioner
e4e3fb96f0ab6 c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 f37590b0584b5 coredns-668d6bf9bc-4wbtn
71aaac2f1ac40 f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5 4 minutes ago Running kube-proxy 0 7aa9c02488927 kube-proxy-bvpxd
bbc1ab888a3c6 a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc 4 minutes ago Running etcd 0 3419ef801ad4e etcd-addons-411768
23453a182488d d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d 4 minutes ago Running kube-scheduler 0 cbe8e675b2554 kube-scheduler-addons-411768
bd9cf5e8aa14a b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389 4 minutes ago Running kube-controller-manager 0 e3f03985ff064 kube-controller-manager-addons-411768
5a2df5cd32b34 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef 4 minutes ago Running kube-apiserver 0 7b1c6bea47c8c kube-apiserver-addons-411768
==> coredns [e4e3fb96f0ab62db92d5a8f7a2fb9d4ef6e2cffd618c8687aa77a2bcc1d057d7] <==
[INFO] 10.244.0.8:56648 - 31824 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000434441s
[INFO] 10.244.0.8:56648 - 52939 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000086021s
[INFO] 10.244.0.8:56648 - 5146 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000062888s
[INFO] 10.244.0.8:56648 - 54071 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055036s
[INFO] 10.244.0.8:56648 - 19825 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000201205s
[INFO] 10.244.0.8:56648 - 58145 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00026493s
[INFO] 10.244.0.8:56648 - 6063 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000127614s
[INFO] 10.244.0.8:34023 - 63651 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000133459s
[INFO] 10.244.0.8:34023 - 63359 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188386s
[INFO] 10.244.0.8:51734 - 42848 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014494s
[INFO] 10.244.0.8:51734 - 42598 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000206876s
[INFO] 10.244.0.8:42950 - 12729 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090486s
[INFO] 10.244.0.8:42950 - 12955 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000266683s
[INFO] 10.244.0.8:48945 - 24007 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070332s
[INFO] 10.244.0.8:48945 - 23788 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051702s
[INFO] 10.244.0.23:39740 - 57396 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00046847s
[INFO] 10.244.0.23:58503 - 7959 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000136447s
[INFO] 10.244.0.23:55707 - 17757 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000271569s
[INFO] 10.244.0.23:58233 - 12356 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059819s
[INFO] 10.244.0.23:33554 - 27500 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114099s
[INFO] 10.244.0.23:45809 - 47241 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000123692s
[INFO] 10.244.0.23:47122 - 16421 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.004433435s
[INFO] 10.244.0.23:46706 - 13978 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005480704s
[INFO] 10.244.0.26:41163 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000287036s
[INFO] 10.244.0.26:43704 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092961s
==> describe nodes <==
Name: addons-411768
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-411768
kubernetes.io/os=linux
minikube.k8s.io/commit=f1e69a1cd498979c80dbe968253c827f6eb2cf37
minikube.k8s.io/name=addons-411768
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_14T16_32_02_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-411768
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 14 Apr 2025 16:31:59 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-411768
AcquireTime: <unset>
RenewTime: Mon, 14 Apr 2025 16:36:27 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 14 Apr 2025 16:34:45 +0000 Mon, 14 Apr 2025 16:31:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 14 Apr 2025 16:34:45 +0000 Mon, 14 Apr 2025 16:31:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 14 Apr 2025 16:34:45 +0000 Mon, 14 Apr 2025 16:31:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 14 Apr 2025 16:34:45 +0000 Mon, 14 Apr 2025 16:32:03 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.237
Hostname: addons-411768
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 0122c2f2036548bba2ce55793f70c87e
System UUID: 0122c2f2-0365-48bb-a2ce-55793f70c87e
Boot ID: ad8475cb-76ab-45b0-801e-128a5aaf00b5
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.32.2
Kube-Proxy Version: v1.32.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m58s
default hello-world-app-7d9564db4-9xf4s 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m21s
ingress-nginx ingress-nginx-controller-56d7c84fd4-h2flx 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m13s
kube-system amd-gpu-device-plugin-5sprs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m19s
kube-system coredns-668d6bf9bc-4wbtn 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m21s
kube-system etcd-addons-411768 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m26s
kube-system kube-apiserver-addons-411768 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m26s
kube-system kube-controller-manager-addons-411768 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
kube-system kube-proxy-bvpxd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m21s
kube-system kube-scheduler-addons-411768 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m26s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
local-path-storage local-path-provisioner-76f89f99b5-vxlrn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m16s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m19s kube-proxy
Normal Starting 4m26s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m26s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m26s kubelet Node addons-411768 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m26s kubelet Node addons-411768 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m26s kubelet Node addons-411768 status is now: NodeHasSufficientPID
Normal NodeReady 4m25s kubelet Node addons-411768 status is now: NodeReady
Normal RegisteredNode 4m22s node-controller Node addons-411768 event: Registered Node addons-411768 in Controller
==> dmesg <==
[Apr14 16:32] systemd-fstab-generator[1231]: Ignoring "noauto" option for root device
[ +0.075623] kauditd_printk_skb: 69 callbacks suppressed
[ +5.282672] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
[ +0.120699] kauditd_printk_skb: 21 callbacks suppressed
[ +5.002509] kauditd_printk_skb: 108 callbacks suppressed
[ +5.018593] kauditd_printk_skb: 128 callbacks suppressed
[ +11.897243] kauditd_printk_skb: 95 callbacks suppressed
[ +15.354401] kauditd_printk_skb: 7 callbacks suppressed
[ +7.021916] kauditd_printk_skb: 11 callbacks suppressed
[ +5.347674] kauditd_printk_skb: 2 callbacks suppressed
[Apr14 16:33] kauditd_printk_skb: 32 callbacks suppressed
[ +5.806465] kauditd_printk_skb: 41 callbacks suppressed
[ +6.546054] kauditd_printk_skb: 30 callbacks suppressed
[ +9.179997] kauditd_printk_skb: 8 callbacks suppressed
[ +5.198527] kauditd_printk_skb: 16 callbacks suppressed
[ +8.985718] kauditd_printk_skb: 9 callbacks suppressed
[ +11.522411] kauditd_printk_skb: 2 callbacks suppressed
[ +5.272183] kauditd_printk_skb: 6 callbacks suppressed
[Apr14 16:34] kauditd_printk_skb: 23 callbacks suppressed
[ +5.577447] kauditd_printk_skb: 30 callbacks suppressed
[ +6.125514] kauditd_printk_skb: 64 callbacks suppressed
[ +6.023287] kauditd_printk_skb: 43 callbacks suppressed
[ +5.271265] kauditd_printk_skb: 27 callbacks suppressed
[ +7.483009] kauditd_printk_skb: 15 callbacks suppressed
[Apr14 16:36] kauditd_printk_skb: 49 callbacks suppressed
==> etcd [bbc1ab888a3c6f4a225e1a7f6b60416c25be2ffe945a16322fb6ee42d1623769] <==
{"level":"info","ts":"2025-04-14T16:33:23.968099Z","caller":"traceutil/trace.go:171","msg":"trace[938940998] linearizableReadLoop","detail":"{readStateIndex:1129; appliedIndex:1128; }","duration":"141.897316ms","start":"2025-04-14T16:33:23.826177Z","end":"2025-04-14T16:33:23.968074Z","steps":["trace[938940998] 'read index received' (duration: 141.451789ms)","trace[938940998] 'applied index is now lower than readState.Index' (duration: 444.829µs)"],"step_count":2}
{"level":"info","ts":"2025-04-14T16:33:23.968636Z","caller":"traceutil/trace.go:171","msg":"trace[1893541846] transaction","detail":"{read_only:false; response_revision:1095; number_of_response:1; }","duration":"189.047518ms","start":"2025-04-14T16:33:23.779569Z","end":"2025-04-14T16:33:23.968616Z","steps":["trace[1893541846] 'process raft request' (duration: 188.174025ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:33:23.968723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.539694ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-04-14T16:33:23.970899Z","caller":"traceutil/trace.go:171","msg":"trace[1145160022] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1095; }","duration":"144.749538ms","start":"2025-04-14T16:33:23.826133Z","end":"2025-04-14T16:33:23.970882Z","steps":["trace[1145160022] 'agreement among raft nodes before linearized reading' (duration: 142.531609ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:33:23.971624Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.107161ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-04-14T16:33:23.973052Z","caller":"traceutil/trace.go:171","msg":"trace[1761018362] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1095; }","duration":"112.553145ms","start":"2025-04-14T16:33:23.860486Z","end":"2025-04-14T16:33:23.973039Z","steps":["trace[1761018362] 'agreement among raft nodes before linearized reading' (duration: 110.901558ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T16:33:26.353516Z","caller":"traceutil/trace.go:171","msg":"trace[431653975] transaction","detail":"{read_only:false; response_revision:1098; number_of_response:1; }","duration":"203.932373ms","start":"2025-04-14T16:33:26.149565Z","end":"2025-04-14T16:33:26.353498Z","steps":["trace[431653975] 'process raft request' (duration: 203.562072ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T16:33:26.602861Z","caller":"traceutil/trace.go:171","msg":"trace[655271812] linearizableReadLoop","detail":"{readStateIndex:1133; appliedIndex:1132; }","duration":"172.728115ms","start":"2025-04-14T16:33:26.430118Z","end":"2025-04-14T16:33:26.602846Z","steps":["trace[655271812] 'read index received' (duration: 166.415679ms)","trace[655271812] 'applied index is now lower than readState.Index' (duration: 6.311694ms)"],"step_count":2}
{"level":"warn","ts":"2025-04-14T16:33:26.602955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.835975ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-04-14T16:33:26.602974Z","caller":"traceutil/trace.go:171","msg":"trace[847519126] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"172.889492ms","start":"2025-04-14T16:33:26.430079Z","end":"2025-04-14T16:33:26.602969Z","steps":["trace[847519126] 'agreement among raft nodes before linearized reading' (duration: 172.83735ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T16:33:57.046717Z","caller":"traceutil/trace.go:171","msg":"trace[1567630474] linearizableReadLoop","detail":"{readStateIndex:1346; appliedIndex:1345; }","duration":"185.807875ms","start":"2025-04-14T16:33:56.860871Z","end":"2025-04-14T16:33:57.046679Z","steps":["trace[1567630474] 'read index received' (duration: 185.63865ms)","trace[1567630474] 'applied index is now lower than readState.Index' (duration: 168.572µs)"],"step_count":2}
{"level":"warn","ts":"2025-04-14T16:33:57.046976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.065185ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-04-14T16:33:57.047088Z","caller":"traceutil/trace.go:171","msg":"trace[301941662] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1302; }","duration":"186.184667ms","start":"2025-04-14T16:33:56.860865Z","end":"2025-04-14T16:33:57.047049Z","steps":["trace[301941662] 'agreement among raft nodes before linearized reading' (duration: 185.993502ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T16:33:57.047832Z","caller":"traceutil/trace.go:171","msg":"trace[133768102] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"294.896315ms","start":"2025-04-14T16:33:56.752921Z","end":"2025-04-14T16:33:57.047817Z","steps":["trace[133768102] 'process raft request' (duration: 293.628605ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:33:57.047230Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.58117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
{"level":"info","ts":"2025-04-14T16:33:57.048652Z","caller":"traceutil/trace.go:171","msg":"trace[1666697132] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1302; }","duration":"134.053193ms","start":"2025-04-14T16:33:56.914585Z","end":"2025-04-14T16:33:57.048638Z","steps":["trace[1666697132] 'agreement among raft nodes before linearized reading' (duration: 132.513931ms)"],"step_count":1}
{"level":"info","ts":"2025-04-14T16:34:26.836379Z","caller":"traceutil/trace.go:171","msg":"trace[2117610398] linearizableReadLoop","detail":"{readStateIndex:1665; appliedIndex:1664; }","duration":"236.383176ms","start":"2025-04-14T16:34:26.599980Z","end":"2025-04-14T16:34:26.836363Z","steps":["trace[2117610398] 'read index received' (duration: 234.762148ms)","trace[2117610398] 'applied index is now lower than readState.Index' (duration: 1.62031ms)"],"step_count":2}
{"level":"warn","ts":"2025-04-14T16:34:26.836598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.595853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 ","response":"range_response_count:1 size:1594"}
{"level":"info","ts":"2025-04-14T16:34:26.837143Z","caller":"traceutil/trace.go:171","msg":"trace[2012354821] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1605; }","duration":"237.175837ms","start":"2025-04-14T16:34:26.599959Z","end":"2025-04-14T16:34:26.837135Z","steps":["trace[2012354821] 'agreement among raft nodes before linearized reading' (duration: 236.536431ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:34:26.836829Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.712358ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-04-14T16:34:26.837283Z","caller":"traceutil/trace.go:171","msg":"trace[1401235009] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1605; }","duration":"203.184965ms","start":"2025-04-14T16:34:26.634091Z","end":"2025-04-14T16:34:26.837276Z","steps":["trace[1401235009] 'agreement among raft nodes before linearized reading' (duration: 202.692748ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:34:26.837014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.401219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2025-04-14T16:34:26.837475Z","caller":"traceutil/trace.go:171","msg":"trace[753201394] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1605; }","duration":"105.882396ms","start":"2025-04-14T16:34:26.731584Z","end":"2025-04-14T16:34:26.837467Z","steps":["trace[753201394] 'agreement among raft nodes before linearized reading' (duration: 105.408369ms)"],"step_count":1}
{"level":"warn","ts":"2025-04-14T16:34:26.837042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.150068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-04-14T16:34:26.838299Z","caller":"traceutil/trace.go:171","msg":"trace[202009545] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1605; }","duration":"160.427821ms","start":"2025-04-14T16:34:26.677863Z","end":"2025-04-14T16:34:26.838291Z","steps":["trace[202009545] 'agreement among raft nodes before linearized reading' (duration: 159.161943ms)"],"step_count":1}
==> kernel <==
16:36:28 up 5 min, 0 users, load average: 1.87, 1.59, 0.76
Linux addons-411768 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [5a2df5cd32b3417b29f619c5242ee5090669d94922a7a53db3179ee7e1332cc3] <==
E0414 16:32:48.752106 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.147.230:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.147.230:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.147.230:443: connect: connection refused" logger="UnhandledError"
I0414 16:32:48.813462 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E0414 16:33:41.188330 1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:34348: use of closed network connection
E0414 16:33:41.371979 1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:34368: use of closed network connection
I0414 16:33:50.579236 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.88.251"}
I0414 16:34:01.527282 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W0414 16:34:02.561160 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0414 16:34:07.310110 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0414 16:34:07.587278 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.200.21"}
I0414 16:34:16.090874 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0414 16:34:33.459998 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0414 16:34:33.460268 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0414 16:34:33.552764 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0414 16:34:33.552837 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0414 16:34:33.562107 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0414 16:34:33.562200 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0414 16:34:33.578159 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0414 16:34:33.578217 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0414 16:34:33.616937 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0414 16:34:33.616982 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0414 16:34:34.578296 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0414 16:34:34.621320 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
W0414 16:34:34.628135 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
I0414 16:34:49.757540 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I0414 16:36:27.151139 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.173.71"}
==> kube-controller-manager [bd9cf5e8aa14acadbac81dfd28f80ab479fe366057d18bfc45fb36632905fc67] <==
E0414 16:35:18.690132 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0414 16:35:44.444009 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0414 16:35:44.445012 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W0414 16:35:44.445910 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0414 16:35:44.445973 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0414 16:35:44.609952 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0414 16:35:44.610773 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
W0414 16:35:44.611703 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0414 16:35:44.611746 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0414 16:35:59.312792 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0414 16:35:59.313936 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W0414 16:35:59.314866 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0414 16:35:59.314904 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W0414 16:36:00.060140 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0414 16:36:00.061129 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W0414 16:36:00.061966 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0414 16:36:00.062009 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I0414 16:36:26.955059 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="28.076569ms"
I0414 16:36:26.973778 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="18.625163ms"
I0414 16:36:26.973899 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="26.378µs"
I0414 16:36:26.980820 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="71.272µs"
W0414 16:36:27.438947 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E0414 16:36:27.439839 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W0414 16:36:27.441524 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0414 16:36:27.441557 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
==> kube-proxy [71aaac2f1ac40ebb047bfc5db4fbac4d0313010afb806284b4155772309d8411] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E0414 16:32:09.679906 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I0414 16:32:09.691323 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.237"]
E0414 16:32:09.691377 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0414 16:32:09.773916 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I0414 16:32:09.773949 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0414 16:32:09.773971 1 server_linux.go:170] "Using iptables Proxier"
I0414 16:32:09.776292 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0414 16:32:09.776548 1 server.go:497] "Version info" version="v1.32.2"
I0414 16:32:09.776560 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0414 16:32:09.780192 1 config.go:199] "Starting service config controller"
I0414 16:32:09.780216 1 shared_informer.go:313] Waiting for caches to sync for service config
I0414 16:32:09.780239 1 config.go:105] "Starting endpoint slice config controller"
I0414 16:32:09.780243 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0414 16:32:09.780772 1 config.go:329] "Starting node config controller"
I0414 16:32:09.780779 1 shared_informer.go:313] Waiting for caches to sync for node config
I0414 16:32:09.880529 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0414 16:32:09.880572 1 shared_informer.go:320] Caches are synced for service config
I0414 16:32:09.880829 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [23453a182488dc6f3e93ca6d056ec74f483a6b13d78ea67e71d08d2d45579a20] <==
W0414 16:31:59.776014 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 16:31:59.776024 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:31:59.776068 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0414 16:31:59.776097 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 16:31:59.776131 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 16:31:59.776142 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.586860 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 16:32:00.586922 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.708461 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 16:32:00.708510 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.746525 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0414 16:32:00.746576 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.825552 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0414 16:32:00.826210 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.857572 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0414 16:32:00.857618 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.860214 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 16:32:00.860272 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.883621 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E0414 16:32:00.883666 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.893741 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 16:32:00.893784 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0414 16:32:00.916557 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0414 16:32:00.916606 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
I0414 16:32:01.370664 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.124851 1238 iptables.go:577] "Could not set up iptables canary" err=<
Apr 14 16:36:02 addons-411768 kubelet[1238]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Apr 14 16:36:02 addons-411768 kubelet[1238]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Apr 14 16:36:02 addons-411768 kubelet[1238]: Perhaps ip6tables or your kernel needs to be upgraded.
Apr 14 16:36:02 addons-411768 kubelet[1238]: > table="nat" chain="KUBE-KUBELET-CANARY"
Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.545083 1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648562544657654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:02 addons-411768 kubelet[1238]: E0414 16:36:02.545107 1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648562544657654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:12 addons-411768 kubelet[1238]: E0414 16:36:12.547068 1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648572546809660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:12 addons-411768 kubelet[1238]: E0414 16:36:12.547108 1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648572546809660,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:22 addons-411768 kubelet[1238]: E0414 16:36:22.550160 1238 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648582549852290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:22 addons-411768 kubelet[1238]: E0414 16:36:22.550210 1238 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1744648582549852290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595808,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.952842 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="node-driver-registrar"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953276 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="08f31c66-3f2c-442e-bed1-d74113220a4c" containerName="volume-snapshot-controller"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953362 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="ae23e104-95da-40ae-80b9-0400fb264d20" containerName="volume-snapshot-controller"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953394 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="hostpath"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953527 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="liveness-probe"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953561 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-provisioner"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953645 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="1585fae8-d827-4996-8f81-6d06a66b84ee" containerName="cloud-spanner-emulator"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953682 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-snapshotter"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953765 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="1c5ebede-4ffc-4554-98e8-6b877134818e" containerName="csi-resizer"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953800 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="e5a2d34f-7429-47b0-9239-917c6907123c" containerName="nvidia-device-plugin-ctr"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953888 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4e6a15d-c481-4c65-8460-c1e3cd4fd26a" containerName="csi-external-health-monitor-controller"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.953920 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="779860dd-6f16-40ee-a078-aa1f4dd024cb" containerName="task-pv-container"
Apr 14 16:36:26 addons-411768 kubelet[1238]: I0414 16:36:26.954004 1238 memory_manager.go:355] "RemoveStaleState removing state" podUID="ed55eafd-36ee-4183-9d67-d584935ba068" containerName="csi-attacher"
Apr 14 16:36:27 addons-411768 kubelet[1238]: I0414 16:36:27.010201 1238 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h92n\" (UniqueName: \"kubernetes.io/projected/c049a27d-8170-46f8-8ed9-29e70b408cdb-kube-api-access-4h92n\") pod \"hello-world-app-7d9564db4-9xf4s\" (UID: \"c049a27d-8170-46f8-8ed9-29e70b408cdb\") " pod="default/hello-world-app-7d9564db4-9xf4s"
==> storage-provisioner [693f2719e9989206f59406fd47ff21a2f797f42b3d5eef599c90b28543648564] <==
I0414 16:32:14.574029 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0414 16:32:14.623936 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0414 16:32:14.623991 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0414 16:32:14.644293 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0414 16:32:14.645063 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-411768_e0494113-686a-45d6-952b-08fb9770703b!
I0414 16:32:14.646931 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6dd6cea6-55be-4fcc-9f09-dedf5a4b05e4", APIVersion:"v1", ResourceVersion:"636", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-411768_e0494113-686a-45d6-952b-08fb9770703b became leader
I0414 16:32:14.746638 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-411768_e0494113-686a-45d6-952b-08fb9770703b!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-411768 -n addons-411768
helpers_test.go:261: (dbg) Run: kubectl --context addons-411768 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm: exit status 1 (53.641652ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-lqjfj" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-qtplm" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-411768 describe pod ingress-nginx-admission-create-lqjfj ingress-nginx-admission-patch-qtplm: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-411768 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable ingress-dns --alsologtostderr -v=1: (1.057950232s)
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-411768 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-411768 addons disable ingress --alsologtostderr -v=1: (7.674564648s)
--- FAIL: TestAddons/parallel/Ingress (151.35s)